title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Canonical normalizing flows for manifold learning | Accept (poster) | Summary: The authors address the interesting question of how to properly disentangle the relevant manifold directions in the latent space of manifold-learning normalizing flows (MLF). After a comprehensive theoretical background, the main motivation is presented in the form of a toy example of a simple noisy line embedded in $\mathbb{R}^{2}$. When varying the latent dimension $z_{i}$ of an RNF and setting the remaining latent variables to $0$, the corresponding contour lines are partially aligned. Doing the same with the proposed method, CMF, leads to orthogonal contour lines and thus to a meaningful latent representation where one latent variable encodes the line and the other the (noisy) off-manifold direction. This intuition leads straightforwardly to enforcing the orthogonality of the flow's Jacobian matrix. To do so, the authors propose a new cost function for learning manifold structures with an NF by adding a penalty term which ensures that the off-diagonal entries of the Gram matrix (Jacobian-transpose-Jacobian) are $0$ on the one hand, and are sparse on the other due to the l1-norm. The effectiveness of the method is tested in experiments on toy data, tabular data, and 32x32x3 image data.
Strengths: The paper is overall well written. The theoretical background is exhaustive and well-explained. The example in 4.1 serves as a good illustrative motivation. The experiments are comprehensive.
Weaknesses: The contribution is somewhat marginal. Though citing [1], they don't mention that the corresponding authors propose a very similar penalty term. To my understanding, the only differences of the penalty term in [1] are
a) that in [1] the diagonal entries are penalized to be $1$ whereas in this work the diagonal entries are unconstrained.
b) [1] used the L2-norm whereas in this paper the L1-norm is used.
Given the great similarity, the penalty term suggested by [1] should be included in the comparison. What do we gain by not constraining the flow to be an isometry? In addition, the authors use the M-flow for benchmarking although a better method for overcoming the limitations of M-flows (namely that the log-determinant term is not considered while training) was already proposed in [2]. At least citing and ideally comparing with [2] should be part of the paper. The same is true for comparing with [3]. It would be nice to see how the contour lines differ from the method introduced in [3].
Furthermore, I am not sure I understand the reasoning behind Definition 4.1. To my understanding, every differentiable manifold is a canonical manifold, as one is free to choose the basis of the tangent space to be orthogonal. Thus, such a basis always exists, which is the only requirement for being a canonical manifold. Definition 4.1 is simply an existence statement and not useful as such. I think what the authors wanted to define is a manifold equipped with a chart s.t. the chart induces an orthogonal basis of the tangent space. However, this is very much the definition of the principal component flow [3]. Please enlighten me in case I totally misunderstood something.
I am also confused about the comparison with rectangular NFs (RNF) in Figure 2. By definition, for an RNF we have $d<D$. However, in Figure 2 the authors use $d=D$. This is a contradiction. If $d=D$, a standard NF can be used rather than an RNF. In addition, one claimed strength of the proposed CMF is the sparsity induced by the l1 norm. Thus, the relevant dimensions should be found automatically, which, in theory, obviates the need to estimate $d$ a priori. Then, why not always set $d=D$ and then apply the proposed penalty term? The number of prominent latent dimensions should correspond to the true intrinsic dimensionality of the data.
Finally, the notation for the manifold, data manifold, and learned manifold is confusing. In line 74 or line 90, $\mathcal{M}$ is a lower-dimensional manifold. In line 83, $\mathcal{M}_{\theta}$ is introduced without explanation. In line 212, $\mathcal{M}$ is referred to as the learned manifold, and in the same line, the data manifold is introduced without further explanation. In addition, a new notation for a canonical manifold is suggested. I find the various versions for a manifold confusing and don't see the added benefit of it (especially given that Definition 4.1 is not useful in my opinion).
[1] Eike Cramer, Felix Rauh, Alexander Mitsos, Raúl Tempone, and Manuel Dahmen. Nonlinear isometric manifold learning for injective normalizing flows. arXiv preprint arXiv:2203.03934, 2022.
[2] C. Horvat and J.-P. Pfister. Denoising normalizing flow. In Advances in Neural Information Processing Systems, volume 34, 2021.
[3] Edmond Cunningham, Adam D. Cobb, and Susmit Jha. Principal component flows. In International Conference on Machine Learning, pages 4492–4519. PMLR, 2022.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Does Definition 4.1 really make sense (see above)?
2. Is the comparison with RNF in Figure 2 fair (see above)? By definition of RNF, it must be $d<D$. Please elaborate.
3. Why not always set $d=D$ and then apply the proposed penalty term (see above)?
4. Figure 1. Why is the density varying greatly when only varying one dimension? Shouldn't it be uniform if the density is properly learned?
5. What is the quality in terms of density estimation? For instance, what is the KS statistic compared to the true density, or, at least, does the density integrate to 1?
6. Why not benchmark with PCA flow? It would be interesting to see if this method can overcome some of its limitations (see above).
7. The Moebius band seems to be poorly learned with the RNF. Why?
8. Why are the prominent latent dimensions the ones with the greatest $G_{ii}$? Is this a mathematical statement/proposition or an intuition?
9. line 305: in which case would an orthogonal basis not be an optimal representation?
10. What happens if we use the L2 rather than L1 norm?
11. How are prominent latent dimensions defined? Do they correspond to the true intrinsic dimensionality? Is there a clear cut-off in the magnitudes of $|G_{kk}|$ in e.g. MNIST? If not, a somewhat arbitrary threshold needs to be defined, doesn't it?
*Minor*:
+ line 70-72: the statement is only true for large $n$
+ line 83: What is $\mathcal{M}_{\theta}$?
+ line 83: "to encourage $x\in \mathcal{M}_{\theta}$" do you mean $g_{\phi}(z)$ rather than $x$?
+ line 114: typo: "due to lack..."
+ line 145: noise is not perpendicular to line
+ line 217: mention reason (maybe in a footnote)
+ line 225: $\gamma$ not crucial but had an impact on training time? What is crucial for you?
+ line 244: bad wording "the expected the canonical manifold learning"
+ line 297: Inconsistent acronym: M-flow vs. Mflow
+ line 299: inferior log-likelihood calculations of M-flow...where were these shown?
+ line 309/310: wrong statement: there are diffusion models with lower dimensional latent space. However, by default indeed the latent space has the same dimensionality
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Major limitations are addressed in Section 6. However, some questions are unresolved such as: how to find/define the prominent latent dimensions without an intrinsic dimensionality estimator?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: We thank the reviewer for their insightful comments. We would like to emphasize that, while it may seem like a simple implementation detail, calculating the L1 norm of the off-diagonal elements is not a trivial conceptual step, and this distinction mitigates many of the drawbacks of previous methods in a simple yet theoretically grounded way. For instance, in [1], an isometric embedding (essentially a predefined constrained transformation) is presented. Although it allows for direct density estimation and is a form of PCA, its expressivity is greatly limited, as evidenced by the authors' attempts to address these limitations using additional components like I-AE. Indeed, standard approaches often involve focusing solely on the diagonal elements or strictly enforcing the off-diagonals to be zero, as in [3]. However, these methods come with constrained expressivity.
As detailed in the manuscript, the utilization of the L1 norm on the off-diagonal elements facilitates sparse learning and/or variable (non-strict) local orthogonality. This means that the chart's transformation can be significantly non-linear while retaining some of the benefits of orthogonal representations. Note that the orthogonality does not imply a global coordinate system, only a local one. On the other hand, the L2-norm is used to match the metric to $\mathcal{I}$, ensuring an isometric embedding, an entirely distinct concept. As a result, we posit that our contribution is substantial not only as an independent method but also as a prototypical idea for optimization problems.
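For concreteness, the regulariser discussed above can be sketched numerically. This is our own toy illustration with a hypothetical chart `f` and a finite-difference Jacobian, not the paper's flow-based implementation:

```python
import numpy as np

def jacobian(f, z, eps=1e-6):
    """Forward-difference Jacobian of f: R^d -> R^D at z."""
    z = np.asarray(z, dtype=float)
    f0 = np.asarray(f(z))
    J = np.zeros((f0.size, z.size))
    for k in range(z.size):
        dz = np.zeros_like(z)
        dz[k] = eps
        J[:, k] = (np.asarray(f(z + dz)) - f0) / eps
    return J

def off_diagonal_l1(f, z):
    """L1 norm of the off-diagonal entries of the Gram matrix G = J^T J,
    i.e. a penalty that encourages locally orthogonal (and sparse)
    tangent directions."""
    J = jacobian(f, z)
    G = J.T @ J
    off = G - np.diag(np.diag(G))
    return np.abs(off).sum()

# A chart with orthogonal tangent vectors incurs (near-)zero penalty ...
f_orth = lambda z: np.array([z[0], z[1], 0.0])
# ... while an entangled chart is penalised.
f_mix = lambda z: np.array([z[0] + z[1], z[0] - 0.5 * z[1], z[0]])

z0 = np.array([0.3, -0.7])
print(off_diagonal_l1(f_orth, z0))  # near 0
print(off_diagonal_l1(f_mix, z0))   # clearly positive
```

In an actual flow, the Jacobian would come from automatic differentiation of the decoder and the penalty would be added to the training objective with some weight.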
W2: Indeed, a comparison with [1] should be incorporated into the relevant work section in the revised manuscript. Thank you for pointing this out. That said, given that our primary aim was to establish a highly expressive transformation devoid of heuristic solutions, which aligns with the core ideas of manifold learning flows, we focused on the most relevant part of the literature for brevity.
Indeed, [2] represents a progression beyond M-flows, while RNFs directly compute the JtJ term that M-flows neglect, overcoming the aforementioned limitation. RNFs can be seen as a parallel approach to DNF [2] but with a more direct methodology, circumventing potential ambiguities stemming from heuristic techniques like density denoising [2]. [2] will also be cited in the revised version, thank you for pointing this out.
In reference to [3], PCA flow relies on a lower-bound estimation of the probability density to bypass the JtJ calculation. This bound is tight when a complete principal component flow, as defined by the authors, is achieved. The fundamental distinction here is that CMF does not confine itself to a pure PCA flow scenario, which has the potential to restrict expressivity. Since our method only loosely enforces orthogonality, or mutual information $\rightarrow 0$, we anticipate the contour lines to lie somewhere between NF and PCA flow. Furthermore, the introduction of sparsity adds an extra layer of complexity, making direct comparison challenging. As an outlook, we acknowledge that [2] and possibly [3] (see Q6) can undergo quantitative comparison with our method. However, at present, due to time constraints and the necessity for GPU cluster maintenance, we have been unable to carry out such comparisons.
W3: It is accurate that any Riemannian manifold can be characterized by an orthogonal local basis. We merely use 4.1 to precisely define the term "canonical manifold", given that it is not a standard term in the literature. We also include sparsity as part of the definition.
Generally, the meaning of "canonical manifold" can vary depending on the context in which it is used. It is astutely pointed out that Definition 4.1 aligns closely with the assumption of PCA flow. However, it is vital to note that our method does not enforce this strictly. To put it in perspective, our approach seeks a 'partly canonical manifold,' if you will. Additionally, our method encompasses sparsity—meaning the diagonal elements can also approach zero.
W4: Excellent remark; yes, we do employ the full dimension, $d=D$, for the low-dimensional simulated data, and this choice serves illustrative purposes. First, it allows us to visualize what all the latent dimensions are doing in relation to what we expect them to do. In particular, CMF correctly uses only 2 latent dimensions for representing 2D surfaces even though it has 3 latent dimensions. Second, it clearly shows the advantage over RNFs for the case where the dimensions of the embedded manifold are known. In principle, that can be a solution for higher-dimensional data. However, in practice it is very computationally expensive to train a full latent dimension (e.g., the JtJ calculation) and the solution will take long to converge. Additionally, empirical knowledge suggests that a lower-dimensional representation can enhance expressivity, as discussed in the context of M-flows. Consequently, in practice, one can start from a predefined lower dimension and let the network optimize at will.
W5: We apologize for any confusion caused. In line 83 the ``$\theta$'' is a typo. $\mathcal{M}$ represents the lower-dimensional manifold as learned by standard manifold learning, and $\mathfrak{M}$ the manifold learned by our method; this convention is used to distinguish them, as done in lines 212 and 213. The data manifold is also a distinct manifold in that regard. We estimate that 4.1 is useful for the reasons elaborated above.
Q1: 4.1 is used to define the canonical manifold (see W3).
$*$Kindly refer to the global rebuttal for the continuation of the reply
---
Rebuttal Comment 1.1:
Title: Answer
Comment: I appreciate the detailed answers of the authors. I am mainly satisfied with them, and they help me to further appreciate the conceptual contribution of the paper - in particular the usage of the L1 error and the corresponding advantages. I will adjust my score accordingly.
Regarding my questions:
*Q2*: I'm afraid I disagree. The authors of "Rectangular Normalizing Flows" define them only for $d<D$. If $d=D$, an RNF should result in a standard NF. Please correct me if I am wrong. However, using a standard NF on a hollow sphere or Moebius band, Figure 2, should lead to very different results (the density must, in fact, degenerate unless you add some noise, which you did not mention).
*Q3*: I don't quite see why this is computationally more expensive. The only computational impact comes from the computation of the metric tensor. However, this only requires a backward pass, which is efficient to compute. Please elaborate/correct me.
*Q8:* This intuition should be part of the discussion in my opinion.
*Q11: " For the analysis plots, we order the dimensions according to their weights and strictly choose a pre-defined number of them where applicable. "* So you need, in the end, an ID estimation. If so, this should be also clearly discussed in the limitations.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the valuable insights offered in the review and the subsequent responses.
Q2: Indeed, it is correct that it would be equivalent to a standard NF. Moreover, as aptly noted, effectively learning these manifolds remains a challenge, even with a full-dimensional latent space. However, the RNF is based on CIF [5], which is an NF that already tries to solve the learning pathologies of complicated manifolds: "CIFs are not subject to the same topological limitations as normalising flows". Furthermore, in that regard, we estimate there is no practical limitation in setting $d=D$, as also seen empirically. Considering the above, the CMF method is showcased to outperform these previous methods, which also aligns with the theoretical intuition. We extend our appreciation to the reviewer for the thought-provoking discussion, and we will certainly incorporate a comprehensive comment in the revised manuscript to reflect this exchange.
Q3: The bottleneck for both methods is the calculation of the Jacobian-transpose-Jacobian (JtJ), as explained in Appendix 4 and in RNF. The complexity of the approximation used is $\mathcal{O}(id^2) < \mathcal{O}(d^3)$ if $i \ll d$, where $i$ refers to the number of iterations of the iterative conjugate gradients method. M-flow seeks to circumvent this computation entirely, while RNF aims for an approximation. Notably, as accurately noted, the JtJ yields the metric tensor directly, incurring no significant additional computational expense. When $d=D$, as previously discussed, this equates to a standard NF. In this regard, manifold flow methods were motivated in part by the computational efficiency gained by setting $d \ll D$.
Indeed, in alignment with the reviewer's observation, and as addressed here, the efficient approximate methods indicate that CMF could indeed be implemented for a full-dimensional flow. Further exploration of this potential could be undertaken in future studies. We will ensure to provide a diligent commentary in the revised manuscript.
Q8 and Q11: We wish to clarify that this is solely necessary for Figure 4 and potentially for any future use that entails isolating prominent components. In this context, we sincerely appreciate the suggestion, which undoubtedly represents a substantial improvement over our current approach of analysis. Rest assured, we will incorporate these discussions into the updated version of the manuscript.
[5] Relaxing Bijectivity Constraints with Continuously Indexed Normalising Flows
Rob Cornish, Anthony L. Caterini, George Deligiannidis, Arnaud Doucet | Summary: The paper studies current manifold learning methods. It compares the canonical manifold learning flow (CMF) with other manifold learning methods and demonstrates that CMF can learn the orthogonal features existing in data. With synthetic data on the Moebius band, the paper shows the benefits of using CMF. Also, the paper shows that the images generated from the CMF-learned latent space are of higher quality than those from other methods' learned latent spaces. Lastly, the paper acknowledges some limitations of CMF, for example, the high computation cost of the full Jacobian-transpose-Jacobian.
Strengths: 1. The paper is very clear on the advantages of CMF against other manifold learning methods.
2. The paper did various types of experiments to showcase the CMF's performance among the methods.
Weaknesses: 1. Although this is mentioned in the limitations, the paper does not provide results about the computation costs of CMF. There is no comparison of the computation time of CMF against other methods, nor of how the dimension affects the computation cost. It is unclear if CMF is scalable to higher-dimensional datasets.
2. The paper does not compare CMF with a lot of methods, including PCA and ICA or purely deep learning methods. For example, an autoencoder can also extract feature representations from data. How does CMF compare to these approaches?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the complexity of the RNF and CMF methods?
2. For Table 2, is there an explanation for why M-flow works best and CMF worst on GAS?
3. Figure 3 is hard to interpret. What are the advantages of CMF in this plot?
4. What are the generative models used in the image experiments?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations clearly in the paper. They acknowledge the high computation costs and the limitation that the method only applies when the manifold is homeomorphic to $\mathbb{R}^d$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: We appreciate the reviewer for highlighting this matter. A comparative table can be found in Appendix 6 of the supplementary material. To summarize, there exists no substantial computational cost distinction between the RNF and CMF methods, as both encounter the calculation bottleneck of the Jacobian-transpose-Jacobian. Additionally, results from an initial CelebA 64x64 training have been included in the supplementary response PDF, indicating that CMF is scalable to higher-dim datasets with promising results. Currently, manifold learning flows are computationally demanding; nevertheless, their potential is noteworthy due to their mathematical soundness, particularly in the case of methods such as RNF and CMF, which do not implement any fundamental approximations in the density calculations using lower dimensional latent space.
W2: We believe this question is rather related to the general manifold flow literature and not specific to our work. Linear PCA/ICA would fail when applied to data living on non-linear manifolds, for instance those shown in the toy examples. Non-linear PCA or non-linear ICA may be applied, but one needs to identify the feature extractor or the kernel. Manifold learning methods come to aid at this point: they learn the non-linear transformation from the data. The method presented here falls in this category. Regarding the comparison with autoencoders, AEs would not learn data distributions, e.g., they cannot be used to sample or to compute likelihoods. Manifold flow learning techniques, to which our method belongs, fall in the category of methods that approximate distributions. Furthermore, manifold learning flow methods are relatively new, and in that regard there are only a couple of prominent works.
Q1: The bottleneck for both methods is the calculation of the Jacobian-transpose-Jacobian, as explained in Appendix 4 and [1]. The complexity of the approximation used is $\mathcal{O}(id^2) < \mathcal{O}(d^3)$ if $i \ll d$, where $i$ refers to the number of iterations of the iterative conjugate gradients method and $d$ to the latent dimension.
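The $\mathcal{O}(id^2)$ figure can be illustrated with a standalone conjugate-gradients sketch (our own illustration under simplifying assumptions, not the RNF/CMF code): the Gram matrix $G = J^\top J$ is never formed explicitly; only matrix-vector products $v \mapsto J^\top (J v)$ are needed.

```python
import numpy as np

def cg_solve(matvec, b, iters=50, tol=1e-14):
    """Conjugate gradients for G x = b, where the SPD matrix G = J^T J is
    accessed only through matvec(v) = J^T (J v). Each iteration costs one
    mat-vec (O(d^2) for a dense J) instead of the O(d^3) needed to form
    and factor G explicitly."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Gp = matvec(p)
        alpha = rs / (p @ Gp)
        x += alpha * p
        r -= alpha * Gp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
J = rng.normal(size=(8, 5))                 # a tall toy "Jacobian", D=8, d=5
b = rng.normal(size=5)
x = cg_solve(lambda v: J.T @ (J @ v), b)    # residual J^T J x - b becomes tiny
```

For a well-conditioned $G$, far fewer than $d$ iterations already give a usable solve, which is where the $i \ll d$ saving comes from.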
Q2: We appreciate the reviewer's comment. As indicated in the manuscript, we calculate the average best FID score from an average of 5 simulations for each tabular dataset. We have conducted an additional GAS tabular experiment and present the results in an updated FID score table for tabular datasets, accessible in the supplementary PDF. Notably, the performance ordering has shifted, indicating possible inherent challenges in learning the dataset, which might introduce randomness in the outcome. However, it's worth mentioning that all other tabular datasets maintained consistent performance orderings across methods, so we haven't included them here.
To delve deeper into this matter, we visualize the validation FID-like scores during training, available in the supplementary PDF. Upon comparing the results of Hepmass and GAS, it is apparent that learning converges until a certain point, after which instability may arise for all three methods. Despite this instability, it is important to note that our implementation selects the best validation-step checkpoint for testing, mitigating the impact of training instability. However, it is plausible that these instabilities contribute to the variations between different runs. Furthermore, we have introduced another tabular dataset, namely Miniboone, and we have run experiments with double the latent dimensions compared to the original experiments, all with promising results.
Q3: Figure 3 demonstrates the realization of sparse and partially orthogonal learning, as per the hypothesis of the proposed method, when compared with the original approach. Additionally, when these findings are coupled with the better FID scores for the image datasets and the outcomes from the simulated data, it can be deduced that CMF opts for a more ``efficient'' use of the latent space, in other words enforcing orthogonality between dimensions and not putting any weight on unnecessary dimensions. We acknowledge that visualizing high-dimensional data always presents a challenge.
Q4: We wish to clarify that the CMF framework functions as an independent generative model. It approximates the data distribution with a low dimensional latent space. The learned manifold pertains to the lower-dimensional representation of the data, which is subsequently mapped back into the image space, allowing direct sampling from the latent space for image generation. The essence of this work is to demonstrate that by acquiring a more effective latent representation, we achieve a more successful generative model than its precursors, M-flow and RNF!
[1] Rectangular Flows for Manifold Learning Anthony L. Caterini, Gabriel Loaiza-Ganem, Geoff Pleiss, John P. Cunningham
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions and concerns. I am now leaning more toward acceptance and have raised the score to 5.
---
Reply to Comment 1.1.1:
Comment: We would like to express our gratitude to the reviewer for their evaluation. | Summary: The authors introduce a new method for regularising manifold learning flows. Essentially, it attempts to reduce the entanglement between the dimensions of the learned manifold by encouraging non-diagonal elements of the metric tensor to be small. Leveraging the already necessary computation of $J^\top J$, where $J$ is the Jacobian of the flow transformation, this is done efficiently by minimising the $\ell_1$ norm of the non-diagonal entries.
Experiments on synthetic and real data show how this is effectively achieved, especially when compared to similar models that do not employ the regularisation scheme. Beyond achieving the desired effect, the proposed method achieves lower FID-like scores on real tabular data.
Strengths: The paper is very well written and the presented ideas are easy to follow. Experimentally, the authors confirm their claims both on intuitive synthetic examples and on real data. The proposed regularisation scheme is also simple and computationally efficient, which is a desirable feature. Overall, I consider it a strong contribution and see high potential for it to be widely adopted as a mechanism for regularising manifold learning flows.
I also appreciate the limitations raised and discussed by the authors, indicating maturity in their analysis and raising important considerations related to the use of their work.
Weaknesses: Although I see the potential mentioned above, I do consider it could be viewed more widely as a weak point, given the relatively niche application. Although it does not affect the merit of the work, the principle might be too specific to manifold learning flows.
I would consider that more tabular data experiments are needed, with additional data sets, given the mixed results attained. More concretely, some analysis of what specifically differentiates the data sets enough to cause the difference in performance would be welcome.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Tied to the aforementioned weakness, what do the authors posit is the cause for the performance gap in GAS?
- Related to the previous question: are the metric tensors found in tabular data also behaving similarly to the ones shown in Figure 3?
- Is it clear during training when a pathological scenario is reached? How does it behave w.r.t. increasing/decreasing the value of $d$?
- Are the learned manifold representations also useful as a feature extraction procedure for downstream tasks? Could this be also a scenario for evaluating the quality of the learned models? (As opposed to only raw performance in scores)
Minor comments:
- line 35: Caterini et al [7] has => Caterini et al. [7] have
- line 44: Please avoid unnecessary adjectives such as "tedious"
- line 62: why is the set starting with $x_0$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I was pleasantly surprised with the discussion on limitations in the paper. I believe the authors addressed important issues and even highlighted which other approaches could have better raw performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our gratitude to the reviewer for their precise summary.
W1: We thank the reviewer for the insightful remarks. Indeed, we have developed the method in the context of manifold learning flows in order to solve an existing pathology as well as to disentangle the latent space. While specific at the moment, it is our estimate that there is a prototypical idea here, and thus some theoretical value as well. Specifically, we propose and show that minimising the off-diagonal manifold metric elements with an L1 loss allows a simultaneously sparse and/or orthogonal basis. This idea can be used in other optimization schemes.
W2: We thank the reviewer for the suggestion. We carried out one more tabular data experiment, experiments with twice the size of latent dimensions and repetitions of the experiments for understanding. We show the results in the supplementary PDF. See also Q1 for discussion on these and their training curves.
Q1: We appreciate the reviewer's suggestion. As indicated in the manuscript, we calculate the average best FID score from an average of 5 simulations for each tabular dataset. In response to your suggestion, we conducted an additional GAS tabular experiment and present the results in an updated FID score table for tabular datasets, accessible in the supplementary PDF. Notably, the performance ordering has shifted, indicating possible inherent challenges in learning the dataset, which might introduce randomness in the outcome. However, it's worth mentioning that all other tabular datasets maintained consistent performance orderings across methods, so we haven't included them here.
To delve deeper into this matter, we visualize the validation FID-like scores during training, available in the supplementary PDF. Upon comparing the results of Hepmass and GAS, it is apparent that learning converges until a certain point, after which instability may arise for all three methods. Despite this instability, it is important to note that our implementation selects the best validation-step checkpoint for testing, mitigating the impact of training instability. However, it is plausible that these instabilities contribute to the variations between different runs. Furthermore, following your suggestion, we have introduced another tabular dataset, namely Miniboone.
Q2: Indeed, they behave similarly. We visualize these for the tabular datasets trained with sufficiently high latent dimensions and present the mean absolute cosine similarity (MACS) in the supplementary PDF. For the specific case of GAS with d=2, the plot might not offer significant insights. However, we report here the MACS for RNF as $3.16 \pm 0.7$ and for CMF as $2.16 \pm 0.2$. We observe that the MACS is lower for CMF, although, considering the error, there remains some overlap, which is within expectations.
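One way to compute a mean absolute cosine similarity between the tangent vectors (columns of the Jacobian) is sketched below. This is our reading of the MACS statistic named above; the authors' exact normalisation may differ:

```python
import numpy as np

def macs(J):
    """Mean absolute cosine similarity between distinct columns of J
    (the tangent vectors of the learned chart); 0 means the latent
    directions are mutually orthogonal."""
    Jn = J / np.linalg.norm(J, axis=0, keepdims=True)
    C = np.abs(Jn.T @ Jn)                   # |cosine| between all column pairs
    mask = ~np.eye(C.shape[0], dtype=bool)  # drop the trivial diagonal
    return C[mask].mean()

J_orth = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])  # orthogonal columns
J_corr = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 0.0]])  # correlated columns
```

A lower MACS for CMF than for RNF would then directly quantify the claimed orthogonality of the learned latent directions.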
Q3: We conducted training for the tabular datasets with twice the number of their respective latent dimensions, as compared to the original, see Appendix. The outcomes are available in the provided PDF. Notably, we haven't observed any substantial differences. Furthermore, the training curves follow similar behaviours, converging up to a certain point and then displaying a degree of instability. This trend holds true for all three methods when trained on the GAS dataset.
Q4: That is part of the main outlook of this work. For example, improved out-of-distribution detection could be a consequence of the feature extraction, some preliminary but promising results have been included in the supplementary material of this work. Additionally, data that require orthogonal basis vectors, like solutions of a many-body-physics quantum Hamiltonian, can show improved learning performance with the current method.
We acknowledge the minor corrections and will incorporate them into the revised version of the manuscript. The reviewer is correct, the typo $x_0$ should be rectified, and the set should commence with $x_1$ for consistency.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing all issues I raised. After reading the other reviews and rebuttals, I am increasing my score to 7 (Accept), as I no longer think there are any outstanding issues with the submission.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for their valuable comments. | Summary: This paper studies the problem of learning a latent representation for data supported on a low-dimensional manifold. It proposes to promote orthogonality of the tangent vectors arising from a learned chart, on top of existing rectangular flow loss. Experiments are provided to demonstrate the effectiveness of the algorithm.
Strengths: Promoting orthogonality of the tangent vectors from a learned chart is interesting.
Weaknesses: ## Presentation:
1. The paper seems insufficiently prepared and proofread. In particular, there are numerous evident typos even in the first paragraph of the introduction. Consequently, they weaken the credibility of the paper.
1. Lines 29, 33 the "D" in "R^D" uses a mathbb, whereas in lines 31, 32 the "d" or "D" does not.
1. Line 29 “fulfil” -> “fulfill”
1. In equation 1, the q_phi(x) should have been q_phi^{-1} (x)
1. Section 3 talks about general manifold learning, and is almost detached from the technical sections 2 and 4.
2. Line 43: A line is one-dimensional. What is a two-dimension line?
3. If one compares equations (5) and (6), it appears that we assume the learned map q_phi is a chart of manifold M. If that is the case, this should be made explicit.
4. From (8), it is unclear that where the introduced G term are evaluated. I suppose it is on points x_i’s?
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: ## Significance
Looks like the inverse of g_phi is in objective (6), so using line 75 it boils down to the inverses of g_eta and f_theta. How easy is it to invert g_eta and f_theta? The root of this question is, if one takes f_theta to be a square matrix, then inverting the square matrix is needed, leading to high cost if the dimension of the square matrix is large. If f_theta is a square matrix followed by say ReLU, then ReLU is not invertible.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's suggestions. While we acknowledge that different presentation approaches could have been explored, we believed that, given space constraints, the current explanation best encapsulated the work. We are encouraged by the fact that the presentation is also accepted by the other reviewers. We apologize for any typos or language errors and commit to rectifying them diligently in the revised paper.
1.1 Indeed, there was a bracket typo in the LaTeX there.
1.2 In British English it is indeed "fulfil", but since the convention is American English, we will fix this.
1.3 Apologies for the typo.
1.4 Our method is based on manifold learning flows; consequently, the first paragraph is dedicated to that. Furthermore, the rest of the section, and indeed the majority of it, addresses the relevant work that inspires our method or explores similar themes. This includes the Relevance Vector Machine, which motivates sparse learning, PCA and ICA methods that emphasize orthogonality, tabular neural networks which implement a similar idea to ours to constrain a specific NN, and PCA flow which explores orthogonal contour learning for flows.
2. "Two-dimensional" here refers to the length and width of this 'fuzzy line'. Indeed, a line is a 1D object, but the "fuzzy line" we use here has 2D noise added to the line.
3. In line 172, just above equation 6, we explicitly specify 'the transformation of the chart'.
4. This is correct; G depends on x and the parameters of the network. We understand this is explicitly defined by equations (6) and (7); of course, we can include this dependence in the notation for G or explicitly specify that the dependencies are dropped for brevity.
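For concreteness, here is a minimal sketch (our own illustration with a finite-difference Jacobian and hypothetical helper names, not the paper's code) of evaluating $G(x) = J(x)^\top J(x)$ at a point $x$ and penalizing its off-diagonal entries:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian of f: R^d -> R^D at point x."""
    d = x.size
    fx = f(x)
    J = np.zeros((fx.size, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (f(x + e) - fx) / eps
    return J

def offdiag_penalty(f, x):
    """L1 norm of the off-diagonal entries of G(x) = J(x)^T J(x)."""
    J = jacobian_fd(f, x)
    G = J.T @ J
    return np.sum(np.abs(G - np.diag(np.diag(G))))

# A rotation has orthogonal columns everywhere, so G is diagonal;
# a shear does not, so its penalty is nonzero.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotate = lambda x: R @ x
shear = lambda x: np.array([x[0] + x[1], x[1]])

print(offdiag_penalty(rotate, np.array([1.0, 2.0])))  # ~0
print(offdiag_penalty(shear, np.array([1.0, 2.0])))   # ~2
```

The penalty is evaluated pointwise, which is exactly the x-dependence discussed above.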
This is indeed a fundamental property of invertible flows, and it has ultimately been tackled through various methods, such as 'coupling layers' [1], in the last 5-10 years. Additionally, the inconsistency with the activation function is an acknowledged limitation of such approaches that has already been addressed over these years [2]. For instance, when the ReLU activation is utilized in a normalizing flow, it is typically combined with an element-wise scale and shift transformation (affine transformation) to ensure invertibility. The original flow part of our work builds upon these prior contributions and more recent developments [3].
[1] Laurent Dinh, David Krueger, Yoshua Bengio. NICE: Non-linear Independent Components Estimation
[2] Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio. Density Estimation Using Real NVP
[3] Anthony Caterini, Rob Cornish, Dino Sejdinovic, Arnaud Doucet. Variational Inference with Continuously-Indexed Normalizing Flows
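The coupling-layer mechanism described above can be sketched in a few lines (our own illustration in the spirit of [1, 2], not the paper's implementation). Even with a ReLU inside the conditioner, the layer stays exactly invertible, because the conditioner itself is never inverted:

```python
import numpy as np

def coupling_forward(x, scale_shift):
    """Affine coupling: x2 is transformed conditioned on x1.

    scale_shift maps x1 to (log_s, t); exp(log_s) keeps the scale
    positive, so inversion is always possible regardless of the
    (possibly non-invertible) network inside scale_shift.
    """
    x1, x2 = np.split(x, 2)
    log_s, t = scale_shift(x1)
    y2 = x2 * np.exp(log_s) + t
    return np.concatenate([x1, y2])

def coupling_inverse(y, scale_shift):
    y1, y2 = np.split(y, 2)
    log_s, t = scale_shift(y1)
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2])

# Any conditioner works, even one built from ReLU (illustrative choice).
relu = lambda z: np.maximum(z, 0.0)
conditioner = lambda x1: (relu(x1), 2.0 * x1)  # returns (log_s, t)

x = np.array([0.5, -1.0, 2.0, 3.0])
y = coupling_forward(x, conditioner)
x_rec = coupling_inverse(y, conditioner)
print(np.allclose(x, x_rec))  # True
```

In practice the conditioner is a neural network, but the invertibility argument is the same: only elementwise affine operations ever need to be undone.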
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
1.2: I apologize for being ignorant of the spelling - British English, as any other English, is perfectly fine.
1.1-1.3: My intention was not to be nit-picking. Everyone makes typos and mistakes, and that is what proofreading is for. Having obvious typos in early parts of the paper is a sign of a lack of proofreading, thus it lowers the credibility of other parts that are harder to check.
Nonetheless, after reading the response and other reviews, I raised my rating.
---
Reply to Comment 1.1.1:
Comment: We extend our appreciation to the reviewer for offering clarifications and investing the time to re-examine our work. | Rebuttal 1:
Rebuttal: We express our gratitude to both the reviewer and the chair for their valuable time and insights. We have diligently addressed each of the reviewer's comments individually. Furthermore, we have expanded our testing to encompass additional tabular datasets and incorporated CelebA, 64x64 FID test scores for a more comprehensive evaluation.
___
In order to adequately address the points raised in the extended review by reviewer 4 (8w9Q), we make use of this space for further discussion.
Q2: The comparison is indeed fair, as they are both manifold learning flows; nothing restricts the use of $d=D$. Furthermore, similar results can be obtained by embedding a 2-sphere in a 4D space and setting $d=3$, or the orthogonality can be seen by just setting $d=2$. Additionally, the arguments about lower-dimensional expressivity are not so relevant for such low-dimensional data, see W4.
Q3: Excellent remark; it is mathematically sound, but computationally expensive and with possible convergence issues, see W4.
Q4: Yes, exactly for this reason, CMF is shown to be learning better. Perfect density estimation is not a trivial task from our experience.
Q5: The log likelihood (L) is 1.6553 and 1.6517, and the KS p-value is 0.17 and 0.26, for the CMF and RNF methods respectively. For the sphere, L=1.97 and 1.16 for the CMF and RNF methods respectively. CMF shows improved quantitative quality over the RNF, as is also seen qualitatively.
Q6: PCF [3] mentions that they obtain a similar test set log likelihood to that of the normalizing flow (NF), and that the PCF has almost zero pointwise mutual information. Additionally, the M-Flow method already shows improved image generations over other flow methods, and RNF improves on those even further. Therefore, we had estimated that RNF is the main competitor in this regard. Furthermore, there can be no real comparison of the mutual information (or similarity, etc.), as the PCF relies on fully orthogonal contours, which is not necessarily a desirable quality in the CMF framework, as explained in W1. Nevertheless, a comparison can always be made; however, due to time constraints and GPU cluster maintenance, it is currently pending.
Q7: We appreciate you bringing this to our attention. Our methodology was to run about 10 repetitions for each and choose the best. Our estimate is that it is a hard manifold with non-trivial topology that degenerate latent-space representations fail to capture well.
Q8: An excellent question, it is more an intuition that the largest eigenvalues (if an eigen-decomposition is possible) correspond to the dimensions with the highest weights.
Q9: Enforcing strict complete orthogonality can restrict the expressivity of the transformation in the case of high dimensional complex data where the latent manifold is completely unknown.
Q10: L2 with the identity matrix can be used to enforce a specific embedding, e.g. isometric [1] or conformal [4] which is not the purpose of this work, we want the learned transformation to be general as explained in W1 as well.
Q11: We appreciate you bringing this to our attention; it allows us to clarify that the method does not arbitrarily discard any dimensions. Rather, dimensions are effectively irrelevant if their weight is close to zero. For the analysis plots, we order the dimensions according to their weights and strictly choose a pre-defined number of them where applicable. Thresholds would indeed be very hard to define.
Minor: We acknowledge, and we will diligently rectify, all the minor issues in the revised manuscript. Notes: line 83: $\theta$ was a typo. line 225: We mean that the method is not overly sensitive to the hyperparameter $\gamma$, we will rephrase, thank you. line 299: This is the main premise of RNF and Denoising Normalizing flows, also this limitation is accepted in the original M-Flow paper.
Lim: As elaborated in Q11, there is no need to find the prominent latent dimensions specifically, their weights define their significance as adjusted by sparse learning and/or orthogonality.
[4] Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows. Brendan Leigh Ross, Jesse C. Cresswell
Pdf: /pdf/62f3d75cabfe632098b63c13a8d053bce393d048.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Epidemic Learning: Boosting Decentralized Learning with Randomized Communication | Accept (poster) | Summary: The authors study the benefits of using randomized communication topologies for decentralized optimization of non-convex functions. Critically, the communication protocols studied are not based on picking/sampling a communication graph that remains fixed throughout learning. Instead the authors study the case where the nodes randomize their communication patterns with the rest of the nodes in each round. Based on this premise they propose two algorithms, EL Oracle and EL local and study their convergence rate. The prosed algorithms are shown to converge asymptotically faster than state of the art. Experiments verify the theoretical advantage empirically.
Strengths: The paper communicates clearly and precisely its contribution. Despite extensive work on decentralized algorithms, including randomized and fixed topologies that can vary over time, to the best of my knowledge the proposed algorithms are novel. Importantly, the authors note that time varying randomized topologies have not been shown to have an advantage over their static counterparts.
Weaknesses: I am not 100% sure which quantity is bounded in (3) and if it is supposed to be comparable to the rates provided in Theorem 1. The paragraphs after (3) list various ways EL Oracle and Local improve over (3) but I cannot say I follow them. For example, the first term in EL Oracle in Theorem 1 and (3) are identical but the comments mention an advantage in the first term. Clarifying this would help.
In its current state, the paper is not adequately explaining which part of the analysis unlocks the potential of non-static randomized communication graphs. I think highlighting which parts of the analysis are critical would help. Intuitively, I would expect that improving communication would have an effect similar to decreasing the variance $\sigma$ or improving $p$ but it seems like this is not true.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Another thing that was unclear to me is how is this improvement affected by the fact that we are averaging iterates $x$ instead of gradients. Does the advantage persist in this case? In the case where we are averaging gradients I have the intuition that it would be harder to show an advantage but I am not sure.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations to address.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and provide a detailed rebuttal below.
___
> Q: Which quantity is bounded in (3)? The first term in EL Oracle in Theorem 1 and (3) are identical but the comments mention an advantage in the first term. Clarifying this would help.
Re: The quantity bounded in (3) is the average norm of the gradient, which is exactly the same quantity as in Theorem 1; hence (3) should be compared with the bounds provided in Theorem 1. However, there seems to be a small misunderstanding regarding our comparison. We do not claim an advantage in the first term. In line 212, we mention that our algorithm preserves linear speed-up, which is the same as the bound in (3). In fact, it is not possible to outperform this term, as it is the same term even when running centralized SGD on a single node. Preserving linear speed-up may be surprising for EL-Local, as in this case the mixing matrix is not doubly stochastic.
We claim our bound to have a better second error term which is the main error term when comparing different decentralized learning schemes. Please see our explanation regarding "Transient Iterations" in the global response at the top of the page.
___
> Q: Which part of the analysis unlocks the potential of non-static randomized communication graphs?
Re: To demonstrate the superior convergence of our EL approach, our analysis proceeds in two main steps.
1) The first step is established in Lemma 1, where we demonstrate that the randomized communication within EL achieves the property of fast mixing, with a mixing coefficient ($\alpha_s$ or $\beta_s$) of $\mathcal{O}(\frac{1}{s})$.
2) The second step was tightening the conventional analysis of decentralized learning. Let $\beta = 1 - p \in [0,1)$ be the mixing coefficient (a.k.a. consensus rate [23]). Existing analyses of decentralized learning propose a convergence rate (as per Equation (3)) of:
$$\mathcal{O}\left(\sqrt{\frac{L \Delta_0 \sigma^2}{n T}}+\sqrt[3]{\frac{L^2 \Delta_0^2\sigma^2}{(1-\beta) T^2}}+\sqrt[3]{\frac{L^2 \Delta_0^2\mathcal{H}^2}{(1-\beta)^2 T^2}}+\frac{L \Delta_0}{(1-\beta) T}\right).$$
Here, $\beta$'s impact on the convergence rate only appears in terms formatted as $\mathcal{O}\left(\frac{1}{(1-\beta)^k}\right)$. Therefore, as long as $\frac{1}{1-\beta} \in \mathcal{O}(1)$, diminishing $\beta$ does not affect the order of the convergence rate. Hence, e.g., $\beta = \frac{1}{2}$ and $\beta = \frac{1}{100}$ would have identical convergence rates (with both yielding $\frac{1}{1-\beta} \in \mathcal{O}(1)$ and the exact value of $\frac{1}{1-\beta}$ varying by a factor of less than 2). This suggests that we cannot achieve a convergence rate better than the existing state-of-the-art solutions with $\frac{1}{1-\beta} \in \mathcal{O}(1)$. However, we observed that this limitation arises from a looseness in the conventional analysis. More specifically, the second and third terms in the above expression can be tightened such that they go to zero as $\beta$ approaches zero (see Theorem 1, and the discussion below the theorem). This adjustment allows us to showcase the benefits of EL, which maintains a $\beta$ in $\mathcal{O}(\frac{1}{s})$.
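The claim that $\beta = \frac{1}{2}$ and $\beta = \frac{1}{100}$ change $\frac{1}{1-\beta}$ by less than a factor of 2 can be checked directly with a quick numerical sketch:

```python
# The 1/(1 - beta) factor stays O(1) for any beta bounded away from 1:
# beta = 1/2 versus beta = 1/100 changes it by less than a factor of 2.
factor = lambda beta: 1.0 / (1.0 - beta)

f_half = factor(0.5)    # 2.0
f_small = factor(0.01)  # ~1.0101
print(f_half / f_small)  # ~1.98, i.e. less than 2
```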
___
> Q: What happens if we average gradients instead of models $x$?
Re: In the context of decentralized learning, unlike federated learning, averaging gradients may lead to divergence among local models of nodes. As a result, both our work and the existing related works we compared against employ the strategy of averaging the local models.
We agree that investigating the effects of averaging gradients in the context of decentralized learning could be an interesting avenue for future research. However, it is important to note that doing so necessitates the introduction of additional mechanisms to prevent model drift, given the potential divergence issue.
---
Rebuttal Comment 1.1:
Title: My questions are answered
Comment: I would like to thank the authors for their thorough responses. I would suggest adding at least a summary of this discussion in the main paper. I keep my score and acceptance recommendation as is.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their comments and response, and are glad to hear that the reviewer recommends accepting our work. We will add a discussion to the camera-ready version where we will have an extra page. | Summary: This paper proposes a decentralized learning algorithm based on random communication, i.e., each node sends its model to a random set (with a fixed size) of other nodes at each round. This paper theoretically shows the superiority of random communication in terms of transient iterations over other decentralized algorithms, which is further validated by experiments.
Strengths: 1. This paper provides a simple yet effective scheme of communication for decentralized learning.
2. This paper is technically sound. The technique developed for convergence analysis is interesting.
Weaknesses: 1. In the proposed algorithm, each node is required to send message to a fixed number $k$ of random neighbors at each round. In practice, this may only be applied to networks with high connectivity (e.g., fully connected), and other application scenarios need to be motivated. In addition, the theoretical superiority of convergence directly depends on $k$, so it might be unfair comparing with other algorithms which conventionally works on arbitrary connected graphs. Can the restriction on $k$ or uniformly random selection be relaxed a little bit, say, allow to communicate with varying number of neighbors?
2. Although the authors conduct an experiment to explain the imbalanced load on different nodes, I still have a concern that the balanced pattern reported in this paper may rely on the uniformly random selection. Such an assumption may not be aligned with the real cases, e.g., the degrees of different nodes vary, then some nodes may have higher load than others under random selection. This problem may arise from the non-doubly stochasticity of the mixing matrices. Could other methods handling asymmetric communication, such as push-sum, be helpful for load balancing?
3. Figure 4 shows that EL outperforms the baseline algorithm throughout the learning process, which seems inconsistent with the theoretical statement that the superiority of convergence lies in transient iterations, i.e., the early stage. As the learning proceeds, the convergence will be dominated by the first term. So I suppose the test accuracy of the various methods may get closer with large $T$?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations have been properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and provide a detailed rebuttal below.
___
> Q: Only applicable to networks with high connectivity. other application scenarios need to be motivated.
Re: Thank you for your insightful comment.
You correctly identified that the implementation of our decentralized scheme is built around the condition that all nodes can communicate with each other.
This is similar to the assumptions of the EquiTopo topologies [51], a competitor baseline.
However, we argue that the connectivity requirement is a bit more lenient, allowing our model to function in a wide range of practical scenarios.
In data center settings it is common to train on clusters of highly interconnected GPUs, and all-to-all communication should be achievable in these settings.
In edge settings, e.g., a network of mobile devices collaboratively training a model while keeping private datasets, the communication barrier might appear more substantial.
Nonetheless, Internet networks are generally well-connected, which mitigates this concern.
More importantly, from a practical point of view, even if pairwise communications encounter some barriers, the decentralized and randomized nature of EL-Oracle and EL-Local should still allow for effective model learning and convergence.
The occasional lack of communication between specific nodes should not significantly impact the algorithm's performance, as model updates are still propagated through other communicating nodes, as long as the network is not partitioned.
EL is most useful in scenarios where every pair of nodes can communicate, but the total communication budget is limited.
Our randomized communication scheme allows for efficient use of the limited resources while ensuring faster model convergence than conventional decentralized learning approaches.
We will incorporate these explanations into the camera-ready version of the paper to clarify the assumptions and practical implications of our work.
___
> Q: The theoretical superiority of convergence directly depends on $k$.
Re: Indeed, the superiority of our convergence rate is directly linked to the number of random neighbors $k$ (or $s$ with the notation used in the paper), providing a clear trade-off between convergence speed and communication complexity. This flexibility is a distinguishing aspect of our approach.
Specifically, we obtain a state-of-the-art convergence rate for $k = 1$, with a number of transient iterations in $\mathcal{O}(n^3)$, and the rates can be further improved by increasing $k$. While there are existing works that allow several communication levels [51, 55], unlike our method, their convergence rate does not enhance by augmenting the communication budget.
To obtain this improved convergence guarantee, we needed to enhance the convergence analysis, as the existing analysis suggested that it would not be possible to further improve the convergence rate over the state-of-the-art results and to obtain a number of transient iterations less than $\mathcal{O}(n^3)$ even for communication graphs with high connectivity. For a more detailed theoretical discussion on how we addressed this, please refer to our response to the last concern of reviewer f2yu.
___
> Q: Can the restriction on $k$ or uniformly random selection be relaxed, say, allow to communicate with varying number of neighbors?
Re: This is an interesting idea and we can give each node a "personalized" fan-out value based on characteristics of that node, such as bandwidth capabilities or training speed.
In the current form of our algorithm, setting $k$ differently for different nodes would bias the model to fit better the data distribution of nodes with a higher fanout. To solve that issue, one needs to give more weight to the updates coming from less active nodes.
Since this requires significant modifications to our analysis, we consider this extension beyond the scope of our work.
It is also worth noting that even though there are some convergence results for arbitrary graphs [23], these results often depend on the spectral gap of the underlying graph, which is a very challenging parameter to compute for an arbitrary graph. As a result most of the theoretical works often rely on graphs with balanced degrees such as ring, torus, grid, exponential and EquiTopo graphs.
___
> Q: Imbalanced communication load:
Re: We note that this imbalance only arises in EL-Local as the number of received models per round in EL-Oracle is the same for all nodes.
It might indeed happen in EL-Local that some nodes have to process more incoming models during a round than others.
Yet, this concern can be addressed by having a node refuse an incoming model transfer when it has already received a particular number of models during a round, and having the sender node retry model exchange with another random node that is less occupied.
We will add these details in the camera-ready version of our work.
In larger networks, where nodes usually do not know about the participation of all other nodes, it is common to use a peer sampling service to randomly select other nodes [15]. A peer sampling service is a primitive that provides each node with an (almost) uniformly random subset of nodes, and this subset is periodically refreshed. Using an appropriate peer sampler will also help in balancing out the load between different nodes.
___
> Q: As the improvement is in the number of transient iterations, the test accuracy of the various methods may get closer with large $T$, which contradicts figure 4.
Re: A smaller number of "transient iterations" does not merely imply that an algorithm is faster only during the initial stages of learning. Instead, it indicates that the algorithm has a smaller second error term, which in turn causes the algorithm to perform better throughout the learning procedure. Please see our explanation regarding "Transient Iterations" in the global response at the top of the page.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. My questions regarding the application scenario and the empirical improvement are settled, while the restriction on $k$ or random selection still seems a limitation of this work.
As my concerns are partially settled, I will raise my score to 5. | Summary: This paper considers Epidemic Learning, a framework for distributed optimization where each node in a network pushes gradient-descent updates to a uniform random subset of $s$ nodes in the network. Theoretical bounds on the rate of convergence are derived as well as the number of ``transient iterations," showing that this scheme improves upon existing fixed and randomized topologies. These results are supplemented by some empirical evaluations that show gains in communication and iterations to convergence compared to complete or fixed sparse topologies.
Strengths: The technical results generalize existing convergence rates of complete graphs (i.e. centralized SGD). The analysis is fairly clean and logical, and the paper is fairly well-written. This work also provides some experimental components to support their findings.
Weaknesses: The practical benefits of this kind of decentralized optimization could be better contextualized (see below). The analysis, while clean, does not seem particularly conceptually surprising.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ---As I am not really an expert on these kinds of methods, I'd defer to the other reviewers as to the novelty of this work's approach. I'd be happy to revise my score in light of any such discussions.
---Is there a succinct summary of where the convergence benefits arise compared to, say, fast mixing but fixed network topologies? Intuitively, it seems to be because the convergence is dominated by the second-largest eigenspace, while random averaging avoids any particular bad eigenspace in expectation.
---To elaborate on the above ``Weaknesses," one aspect that was not clear to me (and perhaps could be elaborated more) is the relevant tradeoffs in implementing such a decentralized scheme. More concretely, in what scenarios should this kind of randomized scheme that nonetheless must permit communication between any pair of nodes be employed? If the communication architecture is itself a limiting bottleneck, then this scheme cannot work due to the need for pairwise communications. Is the proposed use case meant primarily as a way to speed up/reduce optimization problems that already employ complete topologies?
---While I am not a systems expert either, are there other tradeoffs one must consider with respect to routing/scheduling these randomized communications? For instance, it seems fairly intuitive how to easily route messages in, say, a ring topology --- when each node now must route to $s$ random nodes and receive messages from $\Omega(s)$ random nodes, is there any ``scheduling'' overhead in practice?
---While I did not get the chance to carefully check the analysis in the Supplementary Material, the analysis seemed quite clean and logical, i.e. write out the various expected quantities in the natural way and derive relatively standard-looking GD-type recurrences. Conversely, this could mean that the analysis itself may not lead to new technical insights for others in the community.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and provide a detailed rebuttal below. Given that the reviewer has such a good intuition about the problem and our solution, we are a bit surprised by the low confidence score of the review.
___
> Q: summary of where the convergence benefits arise ...
Re: The intuition of the reviewer is indeed correct. The convergence behavior of decentralized learning schemes is indeed closely related to the second eigenvalue, or spectral gap, of the mixing matrix which governs the speed of mixing. In our approach, with random communication and under the same communication budget compared to a static topology, we can obtain very fast mixing properties. Please see our response to your last question for a detailed discussion of the theoretical significance of our work.
___
> Q: in what scenarios should this kind of randomized scheme that nonetheless must permit communication between any pair of nodes be employed?
Re: Thank you for your insightful comment.
You correctly identified that the implementation of our decentralized scheme is built around the condition that all nodes can communicate with each other.
This is similar to the assumptions of the EquiTopo topologies [51], a competitor baseline.
However, we argue that the connectivity requirement is more lenient than it may appear, allowing our model to function in a wide range of practical scenarios.
In data center settings it is common to train on clusters of highly interconnected GPUs, and all-to-all communication should be achievable in these settings.
In edge settings, e.g., a network of mobile devices collaboratively training a model while keeping private datasets, the communication barrier might appear more substantial.
Nonetheless, Internet networks are generally well-connected, which mitigates this concern.
More importantly, from a practical point of view, even if pairwise communications encounter some barriers, the decentralized and randomized nature of EL-Oracle and EL-Local should still allow for effective model learning and convergence.
The occasional lack of communication between specific nodes should not significantly impact the algorithm's performance, as model updates are still propagated through other communicating nodes, as long as the network is not partitioned.
EL is most useful in scenarios where every pair of nodes can communicate, but the total communication budget is limited.
Our randomized communication scheme allows for efficient use of the limited resources while ensuring faster model convergence than conventional decentralized learning approaches.
We will incorporate these explanations into the camera-ready version of the paper to clarify the assumptions and practical implications of our work.
___
> Q: ``scheduling'' overhead in practice?
Re: The topology construction in EL-Oracle and the peer sampler in EL-Local might introduce some communication and computation overhead, but this overhead is absolutely minimal compared to the resources used for model exchange and training.
___
> Q: technical insights of the analysis for others in the community:
Re:
We are happy that the reviewer finds our analysis clean and logical.
To demonstrate the superior convergence of our EL approach, our analysis proceeds in two main steps.
1) The first step is established in Lemma 1, where we demonstrate that the randomized communication within EL achieves the property of fast mixing, with a mixing coefficient ($\alpha_s$ or $\beta_s$) of $\mathcal{O}(\frac{1}{s})$.
2) The second step tightens the conventional analysis of decentralized learning. Let $\beta = 1 - p \in [0,1)$ be the mixing coefficient (a.k.a. consensus rate [23]). Existing analyses of decentralized learning establish a convergence rate (as per Equation (3)) of:
$$\mathcal{O}\left(\sqrt{\frac{L \Delta_0 \sigma^2}{n T}}+\sqrt[3]{\frac{L^2 \Delta_0^2\sigma^2}{(1-\beta) T^2}}+\sqrt[3]{\frac{L^2 \Delta_0^2\mathcal{H}^2}{(1-\beta)^2 T^2}}+\frac{L \Delta_0}{(1-\beta) T}\right).$$
Here, $\beta$'s impact on the convergence rate only appears in terms of the form $\mathcal{O}\left(\frac{1}{(1-\beta)^k}\right)$. Therefore, as long as $\frac{1}{1-\beta} \in \mathcal{O}(1)$, diminishing $\beta$ does not affect the order of the convergence rate. Hence, e.g., $\beta = \frac{1}{2}$ and $\beta = \frac{1}{100}$ would have identical convergence rates (both yield $\frac{1}{1-\beta} \in \mathcal{O}(1)$, and the exact value of $\frac{1}{1-\beta}$ varies by a factor of less than 2). This suggests that we cannot achieve a convergence rate better than the existing state-of-the-art solutions with $\frac{1}{1-\beta} \in \mathcal{O}(1)$. However, we observed that this limitation arises from a looseness in the conventional analysis. More specifically, the second and third terms in the above expression can be tightened such that they go to zero as $\beta$ approaches zero (see Theorem 1 and the discussion below the theorem). This adjustment allows us to showcase the benefits of EL, which maintains a $\beta$ in $\mathcal{O}(\frac{1}{s})$.
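As an illustrative aside (not part of the paper's analysis), the effect of $s$ on mixing can be seen in a toy simulation: each node pushes a scalar value to $s$ random peers and then averages its own value with everything it received. The scalar values and the uniform averaging weights are simplifying assumptions for illustration, not the exact EL update rule.

```python
import random

def one_round(values, s, rng):
    """One randomized round: every node sends its value to s random peers;
    each node then averages its own value with all values it received."""
    n = len(values)
    inbox = [[v] for v in values]  # each node starts with its own value
    for i in range(n):
        for j in rng.sample([k for k in range(n) if k != i], s):
            inbox[j].append(values[i])
    return [sum(msgs) / len(msgs) for msgs in inbox]

def consensus_var(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def avg_contraction(n, s, trials=200, seed=0):
    """Average one-round shrinkage factor of the consensus variance."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        vals = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ratios.append(consensus_var(one_round(vals, s, rng)) / consensus_var(vals))
    return sum(ratios) / len(ratios)
```

With $n = 64$, `avg_contraction(64, 8)` comes out markedly smaller than `avg_contraction(64, 1)`: the consensus error shrinks much faster per round as each node pushes to more random peers, which is the qualitative content of a mixing coefficient that decreases with $s$.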
We do agree with the reviewer that parts of our analysis are similar to the standard analysis for SGD (e.g., a recursion on the function value). However, we believe the same concern applies to most of the previous works as well. After all, these methods are essentially variations of SGD with additional elements that introduce new errors, and the distinctions usually lie in the bounds on these additional error terms, as we explained above.
---
Rebuttal Comment 1.1:
Comment: Thanks much for the response (and sorry for the delay)! I think that including some of this discussion on intuition (i.e. where the gains come from) as well as the intended settings where this learning would be more practical would be very useful. I don't have any further questions at this time.
Thanks again! | Summary: This paper proposes a decentralized learning algorithm in which each node
updates its model from a set of s random nodes in a system with n > s
nodes. The authors provide a theoretical analysis of the convergence speed
and the number of transient iterations, i.e., the number of rounds required
to reach linear speedup stage. Experiments are performed for the CIFAR-10
dataset, comparing the two proposed methods EL-Oracle and EL-Local with a
number of baselines that are static topologies. The comparison metrics are
accuracy, test loss, and communication volume with increasing number of
communication rounds.
Strengths: A simple yet effective solution for decentralized learning. It is easily
implementable and can be easily adopted for any decentralized learning
task.
The theoretical analysis takes a somewhat novel approach and is quite technical.
The paper is generally well written.
Weaknesses: Title: Epidemic learning is a very confusing title. The reviewer is of the
opinion that it is an overloaded term. For example, there are many
inference problems in the context of contagions (like infectious diseases)
where the objective is to estimate information about an outbreak. Also, the
reviewer is of the opinion that this title might fail to gain the attention
of readers from the decentralized learning community. Thirdly, it is very
short and non-informative.
Theorem 1 mentions step size \gamma, which does not feature in any
expressions that appear in the statement. The expressions for \gamma in
equations (4) and (5) should be included in the statement.
Remark 1: The authors mention that they "provide convergence rate directly
for the local models". But wouldn't the expected convergence rate of the
entire global averaged model depend on the maximum of (worst case) the
convergence rates among the local models? This is not addressed.
Experiments section is weak: All the inferences are based on a single
learning problem on a single dataset. The gain in accuracy is not
significant, but again the reviewer agrees that there are benefits such as
reduction in communication rounds and volume. Yet, the gain in accuracy is
low enough to wonder what would happen if the same was used for, say a
classification task on ImageNet.
The performance for varying s: The authors only use s=7. They mention that
this is consistent with baselines. However, they could have performed a
separate experiment to study how their method performs under different
metrics for varying s. Suppose they obtained very good performance already at
s=5; wouldn't that be an interesting finding?
Disconnect between theory and experiments:
1. The authors do not make an attempt to connect theoretical results to
experiments. For example, they mention that the number of transient
iterations is an inverse function of s. There are no experiments (like the
one suggested above) to test the tightness of the theoretical bounds. A
table analogous to Table 1 for experiments would be very helpful.
2. Step size: From a theoretical perspective, the step size \gamma in
equations (4) and (5) depends on s. However, in the experiments (Appendix
D), it is obtained by only running the fully-connected topology. Some
analysis of how convergence depends on s and \gamma (as a function of
s) would be useful.
7-regular static: The authors consider only one instance of 7-regular
topology. There are many possibilities for a 96-node graph. The authors
could have considered 5-10 instances of 7-regular graphs and provided
results averaged over these topologies.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: There are several doubts raised in the Weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do not bring up any limitations. No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and provide a detailed rebuttal below.
___
> Q: Epidemic learning is not a good and informative title.
Re: We can change the title to add some words about the novel element of our approach. For example, we are considering changing the title to: "Epidemic Learning: Boosting Decentralized Learning with Randomized Communication".
___
> Q: Theorem 1 mentions step size $\gamma$, which does not feature in any expressions that appear in the statement. The expressions for $\gamma$ in equations (4) and (5) should be included in the statement.
Re: Thank you for the suggestion. We will move Equations (4) and (5) to the main paper, in the camera-ready version.
___
> Q: Remark 1: The authors mention that they "provide convergence rate directly for the local models". But wouldn't the expected convergence rate of the entire global averaged model depend on the maximum of (worst case) the convergence rates among the local models? This is not addressed.
Re: Note that exactly the same convergence rate as Theorem 1 holds for the global averaged model as well. In fact, proving the convergence guarantee directly on the local models requires one additional step in our analysis (see Footnote 2 on page 16 in the supplementary material). However,
we deliberately decided to provide the convergence guarantee directly on the local models as in practice the nodes may not have access to the global averaged model. We will clarify this in the camera-ready version of the paper.
___
> Q: All the inferences are based on a single learning problem on a single dataset. The gain in accuracy is not significant. What would happen if the same was used for, say a classification task on ImageNet.
Re: We have selected the CIFAR-10 dataset for our evaluation as it is one of the most common and representative datasets in this field. Due to time and resource constraints, we are unable to run a compute-intensive dataset such as ImageNet, especially in a decentralized setting that demands significantly more resources compared to centralized learning approaches.
The accuracy gain of EL on the CIFAR-10 dataset compared to the baseline is around 2\%, which may or may not be considered significant.
However, note that EL is designed to be an efficient decentralized learning algorithm, which is shown by the $1.5\times$ reduction in communication cost. Our experimental results support the theoretical foundations that EL converges faster than the baselines.
___
> Q: The performance for varying s.
Re:
In response to the reviewer, we have conducted this experiment.
We show in Figure 1 in the attached PDF of the Author Rebuttal the test accuracy for EL-Oracle and EL-Local, for $ s = 4 $, $ s = 7$ and $ s = 14 $, and when using a 7-regular static topology.
We observe that increasing $ s $ increases the convergence speed since more models are exchanged, and results in higher test accuracy when the experiment ends.
___
> Q: The number of transient iterations is an inverse function of s, but there are no experiments that test this.
Re: A smaller number of "transient iterations" indicates that the algorithm has a smaller second error term, which in turn causes the algorithm to perform better during the learning procedure. This is empirically confirmed in Figure 4. Please see our explanation regarding "Transient Iterations" in the global response at the top of the page for a more detailed discussion.
___
> Q: Step size: From a theoretical perspective, the step size $\gamma$ in equations (4) and (5) depends on s. However, in the experiments (Appendix D), it is obtained by only running the fully-connected topology. Some analysis of how convergence depends on s and $\gamma$ (as a function of s) would be useful.
Re: Our theoretical result should be interpreted as an existence result (on the step-size).
In other words, our theory shows that there exists a step size for which the error is bounded by the expression given in the theorem, but it cannot be used to find the exact value of the step-size for the algorithm.
Note, however, that this kind of existence result is common and constitutes a large part of the optimization literature [23]. In practice, the value of the step size should be found by hyperparameter tuning through a grid search.
We originally performed a grid search in a fully-connected topology and used the same learning rates for other experiments, while sanity checking the performance with respect to existing works as doing a grid search for each experiment at a scale of 96 nodes is computationally intensive and we did not want to compromise on the scale.
To address the concern of the reviewer, however, we conducted a grid search for all the baselines in the rebuttal period, and we provide its results in the attached PDF. While we do see some minor differences in the performance for different step sizes, our main conclusion remains the same.
Regarding the theoretical dependence of the convergence rate on $\gamma$, note that in Theorem 1, the step-size $\gamma$ is already fixed at the optimal value given in Equations (4) and (5). Also, the dependence of the rate on $s$ is explained in Section 3.2. In short, the second error term in Theorem 1 vanishes with $s$ at the rate $\mathcal{O}(\frac{1}{\sqrt[3]{s}})$.
___
> Q: 7-regular static: The authors consider only one instance of 7-regular topology. There are many possibilities for a 96-node graph. The authors could have considered 5-10 instances of 7-regular graphs and provided results averaged over these topologies.
Re: We run each experiment five times, and each run uses a different initialization seed.
This seed also influences the generation of the 7-regular static topology.
Therefore, we generated 5 unique instances of a 7-regular graph and presented the averaged results.
We thank the reviewer for pointing this out and will clarify this in the experimental setup in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: The proposed new title looks good to me.
I still feel that the proposed method should have been evaluated on more datasets, not necessarily ImageNet. Each dataset presents its own challenges. It would be good to know whether randomization performs consistently across different datasets and different learning tasks. For example, Reference [31] uses, in addition to CIFAR-10, a public NLP dataset as well.
I appreciate the new set of experiments on sensitivity to s. What is the cost of increasing s? It leads to more communication, and hence, it must be slower. Is this correct?
I am satisfied with the "7-regular static" answer.
---
Reply to Comment 1.1.1:
Comment: We are glad to hear that the reviewer agrees with the proposed title, and we thank the reviewer for their comments about our new experiments.
**Regarding more datasets:** We agree with the reviewer that evaluating our method on different tasks and datasets would be valuable. However, we emphasize that our main contribution is theoretical, obtaining a convergence rate superior to state-of-the-art solutions. To validate our theoretical findings, we provide a proof-of-concept on the CIFAR-10 dataset, a standard dataset in the literature.
However, we will conduct additional experiments for the camera-ready version to address the reviewers’ concerns. For example, the datasets included in the LEAF benchmark would be suitable for these experiments. These datasets have a natural non-IID data partitioning and are more challenging than CIFAR-10. Furthermore, the resource and compute requirements for these experiments are within our compute budget, and we see them occasionally reported in other works. We plan to conduct these experiments for the camera-ready version.
**Regarding the effect of $s$:** The reviewer is correct that increasing $s$ directly increases the communication volume. Whether this leads to an overall slowdown depends on the network environment where our algorithm is deployed. In data center settings, where network links usually have high capacities, one can employ a high value of $s$. In edge settings with limited network capacities, the value of $s$ should likely be smaller to avoid network congestion. This flexibility is, in fact, a distinguishing aspect of our approach. We will add this discussion to the camera-ready version. | Rebuttal 1:
Rebuttal: Firstly, we would like to thank all reviewers for the thorough and insightful comments on our submission. We appreciate the detailed feedback and the points raised, which have offered some valuable new insights.
Below we address two comments that were raised by multiple reviewers.
## Transient Iterations
As multiple reviewers raised concerns regarding the concept of "transient iterations" and its connection to our experimental results, we provide here a clarification:
To clarify, in the context of decentralized learning schemes, the convergence analysis often involves three error terms (as in Theorem 1). Generally, the first error term $\mathcal{O}\left(\frac{1}{\sqrt{nT}}\right)$ is attributed to the stochastic noise in gradient computation during the SGD step. The last error term $\mathcal{O}\left(\frac{1}{T}\right)$ is caused by the initial error in the function value. Notably, these two error terms are generally identical for different schemes since they are not directly related to decentralization. In fact, they even exist when running a centralized SGD on a single node.
In contrast, the second error term $\mathcal{O}\left(\frac{1}{T^{2/3}}\right)$ is the additional error introduced by decentralization and arises due to model drift: different nodes compute gradients on different local models at each step. This term is directly influenced by the efficiency of the mixing within the communication network. Hence, it is the primary term to consider when comparing different decentralized learning schemes. The first term, despite having a stronger dependence on $T$, does not influence the comparison, as it is always fixed regardless of the communication network, be it, say, a sparse ring or a complete graph (see Equation (3) in the paper). Also, the number of iterations required for the first term to become dominant is usually very large and can even exceed the total number of learning iterations in a practical DL setting.
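As a back-of-the-envelope check of this last point (a scaling sketch only; all constants in the error terms are dropped, which is a simplifying assumption of this illustration): equating the stochastic term $\frac{1}{\sqrt{nT}}$ with the decentralization term $\frac{1}{T^{2/3}}$ gives a crossover at $T = n^3$, so with $n = 96$ nodes the stochastic term only becomes dominant after roughly $9 \times 10^5$ rounds.

```python
def stochastic_term(n, T):
    # O(1/sqrt(n*T)) gradient-noise term, constants dropped
    return 1.0 / (n * T) ** 0.5

def drift_term(T):
    # O(1/T**(2/3)) decentralization (model-drift) term, constants dropped
    return 1.0 / T ** (2.0 / 3.0)

n = 96
T_cross = n ** 3  # solving 1/sqrt(n*T) = 1/T**(2/3) for T yields T = n**3

# Early in training the drift term dominates the comparison between schemes,
# and the two terms only balance around T = 96**3 = 884,736 rounds.
drift_dominates_early = drift_term(1000) > stochastic_term(n, 1000)
```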
To capture this phenomenon and facilitate a more straightforward comparison of the second error term, the concept of "transient iterations" has been proposed in previous works. This concept does not merely imply that an algorithm is faster only during the initial stages of learning. Instead, it indicates that the algorithm has a smaller second error term, which in turn causes the algorithm to perform better during the learning procedure.
Considering this perspective, Figure 4 does not contradict the theoretical statement about the superiority of convergence in transient iterations. Instead, it suggests that EL should have a more favorable second error term, resulting in its out-performance throughout the learning process that matches our theoretical findings.
## Additional Experiments
As requested by multiple reviewers, we have added two additional experiments, the results of which can be found in the attached PDF below. Our first additional experiment explores the effect of different values of $ s $, i.e., the number of neighbors we send a model to. Our second additional experiment involves tuning of the step-size $\gamma$. We hope this addresses the concerns of the reviewers.
Pdf: /pdf/4c9d64c3a344bc87c4ea4f575a289630c7c45771.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper explores decentralized learning (DL) algorithms, aiming for faster model convergence while maintaining accuracy comparable to conventional DL methods. The newly proposed algorithm, epidemic learning (EL), leverages a dynamically changing, randomized communication topology to train a machine learning model in a DL environment. The paper provides a theoretical analysis which shows that the EL algorithm surpasses the best-known static and randomized topologies in terms of convergence speed, w.r.t. two key properties: linear speed-up and transient iterations. The experimental results show that the proposed EL-Oracle and EL-Local achieve quicker convergence than baselines.
Strengths: This paper is studying a very interesting and important problem - DL algorithms. The paper is well written, which is easy to follow. The proposed EL algorithm is technically sound. The theoretical analysis and experimental results show its effectiveness compared to baselines.
Weaknesses: A major concern is that the proposed EL algorithm is not significantly different from the semi-dynamic, time-varying, and randomized topologies introduced in the related work section. In particular, gossip learning (GL) looks like a special case of EL-Local with s = 1.
A minor comment: Figure 3 should be improved by giving a label for the x-axis. Also, the abbreviation "CDF" is never spelled out on first use.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Compared with semi-dynamic topologies, and time-varying and randomized topologies, what is the significance of EL? Have any of the previous works provided theoretical analysis on convergence rate?
2. What does "the convergence of GL on non-convex problems remains uncertain" mean?
3. Is GL a special case of EL-local with s=1? In the experiment, have you done sensitivity analysis on s values?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and provide a detailed rebuttal below.
___
> Q: A major concern is that the proposed EL algorithm is not significantly different with semi-dynamic and time-varying and randomized topologies (they are introduced in the related work section). Especially, gossip learning (GL).
Re: We agree with the reviewer that the idea of running SGD over time-varying and randomized topologies existed in some previous works, including GL (as we outlined in the related work section).
However, our work is the first, to our knowledge, that provides a theoretical foundation highlighting the advantages of randomized communication for decentralized learning.
In fact, our main contribution is to prove that an adaptation of SGD over these randomized networks can outperform the state-of-the-art decentralized learning schemes.
Hence, we believe the simplicity and effectiveness of our algorithm only enhance its practical relevance for the community.
___
> Q: What does "the convergence of GL on non-convex problems remains uncertain" mean?
Re: With this sentence, we mean that the convergence of GL on non-convex loss functions has not been theoretically proven yet in the works that introduce and evaluate GL [15-16]. In fact, these works are mainly empirical and do not provide any convergence guarantee.
___
> Q: Is GL a special case of EL-local with s=1?
Re: While at a high level, EL-Local with $s=1$ may look very similar to GL, there are some subtle differences, mainly in the way the aggregation proceeds.
GL is inspired by the gossip protocols [47], and involves two parallel steps.
At each time step, a node sends a message (a model in GL) to one randomly chosen node ($s=1$).
Independently, on receiving a single message, a node aggregates the received model from this message with its own model and updates the aggregated model.
In contrast, EL-Local is designed to be aligned with existing decentralized learning algorithms (D-PSGD), with some basic modifications.
EL-Local proceeds in rounds where first a node updates its local model on its local dataset and sends its model to $s$ randomly-chosen nodes.
Next, the node waits to receive the models from other nodes in the current round.
On receiving one or more models in the ongoing round, the node aggregates them all together.
Even if each node chooses exactly one random node to send the message to ($s=1$), it can receive messages from multiple nodes due to randomness.
In other words, with $s=1$, if a node receives multiple models in a round, a GL node aggregates each received model with its local model separately (one pairwise update per message), whereas in EL-Local there is a single model update per round, in which all models received during that round are aggregated together.
From a theoretical point of view, this subtle difference plays an important role as it ensures the local models of the nodes stay close to each other during the training (Lemma 2), which is essential for the proof of convergence.
In summary, while GL shares some high-level similarities with our approach, GL is not a special case of EL-Local under $ s=1 $.
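The aggregation difference described above can be made concrete with a small sketch. Scalar stand-ins for models and uniform averaging weights are illustrative assumptions here, not the exact update rules of either system.

```python
def gl_aggregate(local, received):
    """GL-style: fold in each received model as it arrives (pairwise averages),
    so earlier messages are progressively down-weighted and order matters."""
    for model in received:
        local = (local + model) / 2
    return local

def el_local_aggregate(local, received):
    """EL-Local-style: a single update per round; all models received in the
    round are averaged together with the local model, independent of order."""
    return (local + sum(received)) / (1 + len(received))
```

For a local model 0 receiving [3, 9], the GL-style rule gives 5.25 but 3.75 if the arrival order flips to [9, 3], while the EL-Local-style rule gives 4.0 either way.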
___
> Q: In the experiment, have you done sensitivity analysis on s values?
Re: We have not done a sensitivity analysis on $ s $ for the original submission, but in response to the reviewer we have conducted this experiment in the rebuttal period.
We show in Figure 1 in the attached PDF in the Author Rebuttal the test accuracy for EL-Oracle and EL-Local, for $ s = 4 $, $ s = 7$, and $ s = 14 $, and when using a 7-regular static topology.
We observe that increasing $ s $ increases the convergence speed since more models are exchanged, and results in higher test accuracy when the experiment ends.
---
Rebuttal Comment 1.1:
Title: I have read the response
Comment: Thanks for the response to my question. Most of my questions have been answered except Question 3. What is the result if you compare EL-Local with s=1 and the GL method?
---
Reply to Comment 1.1.1:
Comment: We are glad to hear that most of the reviewer’s questions have been addressed.
As we answered Question 3 in the rebuttal, could the reviewer specify which aspect they would like further explained? We have explained why GL is not a special case of EL-Local with s=1 (first part of the question), and we have also conducted a sensitivity analysis on the values of s, which can be found in the attached PDF document (second part of the question).
The reviewer might be asking for additional experimental results, comparing GL's achieved test accuracy and communication efficiency with EL and other baselines. Despite the algorithmic similarities between EL-Local and GL, we have not included GL as an experimental baseline because GL operates asynchronously, whereas EL-Local is a synchronous algorithm. This difference complicates a fair and clean comparison. Specifically, the performance of GL highly depends on the underlying node and network characteristics, and it can be made arbitrarily bad if network links are slow and heterogeneous. Under these circumstances, the local models might drift away from each other. Therefore we did not include this result in Figure 4. _If the reviewer believes this result is valuable, we can consider adding this to the camera-ready version._
A similar argument exists on the theoretical side. There is no theoretical convergence rate for GL in prior work, and analyzing this algorithm requires additional assumptions that prevent the local models from drifting away from each other. We consider this analysis beyond the scope of our paper. |
Expressivity-Preserving GNN Simulation | Accept (poster) | Summary: The paper deals with supervised machine learning with graphs, specifically with expressive GNNs, and how to implement them efficiently. Specifically, it investigates the expressive power of graph transformations to transform an input graph such that an ordinary 1-WL-equivalent message-passing GNN can simulate, e.g., k-WL-equivalent GNNs, in a layer-wise fashion (strong simulation). To that, the authors formally define this transformation and show that an MPNN can simulate many popular, more expressive GNNs, see Corollary 3.6.
The authors acknowledge that such a transformation is folklore knowledge.
Moreover, they investigate the regime (weak simulation) where the MPNN on the transformed graph needs more iterations to distinguish the same pairs of non-isomorphic graphs. Restricted to their particular definition of transformation, they show that some GNN layers cannot be strongly simulated but weakly simulated.
Empirically, they show that their transformations often beat standard message-passing GNNs regarding predictive performance on standard benchmarks. Somewhat surprisingly, they show that they sometimes even beat the architectures they are aiming to simulate.
Strengths: The paper is easy to read, and everything is formalized appropriately. The proofs seem to be correct, and the theoretical results are plausible.
To the best of my knowledge, Theorem 5.2 and Corollary 5.3 are new and somewhat interesting.
The experimental study is well conducted, and the presented results are somewhat interesting.
Weaknesses: Section 3 just contains more or less obvious results known within the graph learning community. (This is also clearly acknowledged by the authors in the introduction and the appendix.). However, with this in mind, it is unclear why Section 3 occupies 1/3 of the main paper.
It seems that the proof of Theorem 4.1 strongly exploits specific details of "Structure-to-graph encoding" (Def. 3.1), which makes the results somewhat narrow. The authors should be more clear about this.
The reasoning in lines 191 -- 195 is highly hand-wavy. You seem to implicitly assume that the function is continuous on a compact domain.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: Do you, to some extent, understand why the simulated architectures often lead to better predictive performance over their simulated counterpart?
Q2: Do you, to some extent, know if the simulation factor influences predictive performance? For example, does a high simulation factor lead to better or worse predictive performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: It seems that the proof of Theorem 4.1 strongly exploits specific details of "Structure-to-graph encoding" (Def. 3.1), which makes the results somewhat narrow. This should be clarified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
### Concerning the Weaknesses
>Section 3 just contains more or less obvious results known within the graph learning community (this is also clearly acknowledged by the authors in the introduction and the appendix). However, with this in mind, it is unclear why Section 3 occupies 1/3 of the main paper.
We agree, parts of the graph learning community know that some specific GNN architectures can be simulated (some of these GNNs are even implemented through an MPNN applied on a transformed graph, as discussed in our appendix).
However, formalizing and proving such informal knowledge is an added value for the GNN community. Additionally, we prove that it is possible to simulate further GNNs for which this was previously not known. In particular, the key results of Section 3 are:
(1) a formal definition of strong simulation (i.e., preserving expressivity in each iteration / layer),
(2) a general algorithm that yields the required graph transformations to strongly simulate GNNs that follow our definition of augmented message passing, and
(3) simulation results for 9 GNNs, 5 of which were not known to be simulatable before (and were originally implemented through non-standard message passing).
As far as we know, the formal definition of strong simulation is novel. Through this definition we generalize individual results into a general theoretical paradigm. Furthermore, while such results might be obvious to the original authors of a GNN, coming up with the corresponding graph transformations is non-trivial and has previously required proving their correctness for every single architecture. With (2) we streamline this approach: our algorithm yields a graph transformation (with guaranteed strong simulation) for any GNN that follows our definition of augmented message passing.
Remark on (3): skimming recent LoG and ICML papers, we found two more strongly simulatable GNNs that were not known to be simulatable before and were implemented through non-standard message passing (see global response for more details). A thorough literature review would most likely yield many more strongly simulatable GNNs.
>It seems that the proof of Theorem 4.1 strongly exploits specific details of "Structure-to-graph encoding" (Def. 3.1), which makes the results somewhat narrow. The authors should be more clear about this.
To obtain a useful definition of simulation it is **necessary** to put restrictions on the transformations (e.g., a computationally expensive way to simulate any GNN would be to encode the isomorphism class in the graph labels).
Our definition of structure-to-graph encoding details these necessary restrictions.
We enforce two natural restrictions on structure-to-graph encoding and simulation. The first restriction limits the asymptotic runtime of the encoding to the runtime of a single iteration of the simulated algorithm. This avoids the case of precomputing isomorphism classes and running other expensive algorithms. However, this still allows using the simulated algorithm for a large (but constant) number of iterations to produce features in the structure-to-graph encoding. The second restriction decouples the generation of the graph structure from the features of the original structure, which avoids this issue. These two restrictions allow the simulation of many common GNN architectures and simultaneously forbid the previously discussed trivial cases.
We explain this partly in the discussion section of our paper, but we will make it clearer in the camera-ready version.
>The reasoning in lines 191 -- 195 is highly handwavy. You seem to implicitly assume that the function is continuous on a compact domain.
Thanks for spotting. We will fix this and add the required assumptions to the camera-ready version.
### Concerning the Questions
> Q1: Do you, to some extent, understand why the simulated architectures often lead to better predictive performance over their simulated counterpart?
As indicated in Table 2, GIN + a graph transformation beats the original GNN in 50-60% of experiments. This is exactly the expected behavior: our simulation performs on par with the original GNN, so it is basically a fair coin flip which of the two is better. We attribute the remaining performance differences to randomness / noise in training and hyperparameter search.
> Q2: Do you, to some extent, know if the simulation factor influences predictive performance? For example, does a high simulation factor lead to better or worse predictive performance?
This is an interesting question for follow-up work, as we have two opposing ideas about this. A higher simulation factor means that when an MPNN simulates a higher-order GNN it will need more layers to achieve the same expressivity. We hypothesize that this could lead to worse predictive performance, as deeper GNNs are harder to train due to issues such as oversmoothing. However, there has been recent work [Bause 2022] [Azabou 2023] that argued that slowing down message passing (hence using more layers to perform one standard $1$-WL aggregation) can lead to better predictive results. As a simulation factor greater than 1 slows down message passing compared to the simulated architecture, this could also improve the predictive performance. Overall, more research in this direction is needed.
#
[Azabou 2023]: Azabou et al.; Half-Hop: A graph upsampling approach for slowing down message passing; ICML, 2023
[Bause 2022]: Bause and Kriege; Gradual Weisfeiler-Leman: Slow and Steady Wins the Race; Learning on Graphs Conference, 2022
---
Rebuttal Comment 1.1:
Title: Answers
Comment: Thank you for your answers. If the authors address the above-mentioned shortcomings, I am not against accepting the paper, especially since the other reviewers seem to like the paper. However, I still believe that the results are mostly implicitly known to the community. Hence, I will keep my current score. | Summary: The paper investigates the idea of simulating (replacing) non-standard message passing networks (MPNs) using simple standard MPNs by first applying a graph transformation (new nodes and edges). The paper provides a formal construction of graphs and their generalizations in the form of relational structures that allows for describing e.g. non-pair-wise interactions and cellular complexes. The step through relational structures is important to arrive at the main theorem of the paper, which is that many non-standard MPNs (based on higher-order relational structures) can be simulated with standard MPNs if the relational structure is converted to a graph in an appropriate way. The theoretical results are underpinned with experiments, showing that indeed simple MPNs on transformed graphs perform on par with (or even outperform) advanced non-standard MPNs.
Strengths: * The paper is precise and clear and systematically builds up to the theorems. The various formal definitions provided along the way are helpful in getting a high-level picture of the graph NN field. At the same time, I must admit that the formal approach does require quite some attention from the reader, as the paper is dense with information and intuition is sometimes hard to acquire.
* The experiments are effectively presented. Table 1 and 2 are interpretable and convincing.
* The paper provides a novel analysis.
* The paper gives useful insights into the expressivity of various forms of graph NNs.
* The paper seems reproducible with a link to clean (anonymous) code in the supplementary materials, and the supplementary materials are otherwise very thorough as well.
Weaknesses: The main paper has few details for practitioners. E.g., how to convert a relational structure to a graph is not explained. Details are, however, provided in the supplementary materials. The paper could benefit from a running example, perhaps the CWN case (fig 1), in which the struct2graph conversion is intuitively explained, as well as why one can simulate CWN on this new graph. An example is given at line 248, but it feels a bit too abstract for me.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Although Figure 1 was very helpful in getting the idea across, Figure 2 was less clear, mainly due to the notion of “nested aggregation” not being well explained. E.g. line 273 shows a color update function that depends on a nesting of sets, but how do I relate this to Figure 2? Also, I read this as nested sets, but it is written “nested aggregation functions”; is $C_x$ an aggregation function then? I’m confused here as to what aggregation refers to, could you explain this?
Line 94 the notation $2^{(U^l)}$ was unknown to me so I had to google it. It means the space of all mappings/functions $a: U^l\rightarrow\{0,1\}$, right? Perhaps some detail/definition can be added.
Are there any general recommendations for how to derive the graph transform to simulate non-standard MPNNs, or is this a case-by-case analysis? E.g., it is still unclear to me how one should derive/define R.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Limitations are appropriately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and appreciate that the reviewer acknowledges that our paper is precise, clear, has convincing experiments, and provides a novel analysis of graph neural networks.
> At the same time, I must admit, that the formal approach does require quite some attention from the reader as the paper is dense with information and intuition is sometimes hard to acquire.
Thank you for pointing this out, we will add more examples to aid with intuition.
### Concerning the Weakness
> The main paper has few details for practitioners. E.g., how to convert a relational structure to a graph is not explained. Details are, however, provided in the supplementary materials. The paper could benefit from a running example, perhaps the CWN case (fig 1), in which the struct2graph conversion is intuitively explained, as well as why one can simulate CWN on this new graph. An example is given at line 248, but it feels a bit too abstract for me.
We provide some tips on how to transform structures to graphs in the proof sketch of Theorem 3.5. However, we agree that the main paper does not give enough explanations. We will add CWN as a running example.
### Concerning the Questions
> Although Figure 1 was very helpful in getting the idea across, Figure 2 was less clear, mainly due to the notion of “nested aggregation” not being well explained. E.g. line 273 shows a color update function that depends on a nesting of sets, but how do I relate this to Figure 2? Also, I read this as nested sets, but it is written “nested aggregation functions”; is $C_x$ an aggregation function then? I’m confused here as to what aggregation refers to, could you explain this?
In the paper we write _nested aggregation function_ but we think that the word function might be confusing and will thus change it to just _nested aggregation_. Furthermore, we will explain nested aggregations better in the final version of the paper. In the definition $C_x$ is meant to be an aggregation $C_x = \\{\\!\\{ c_y \\mid y \\in \\mathcal{N}(x) \\}\\!\\}$ (see line 273). A nested aggregation is an aggregation inside another aggregation. For example, $\\{\\!\\{ c_y^t \\mid y \\in \\mathcal{N}_2(x) \\}\\!\\}$ is a single aggregation and $\\{\\!\\{ \\{\\!\\{ c_y^t \\mid y \\in \\mathcal{N}_2(x) \\}\\!\\} \\mid x \\in \\mathcal{N}_1 (v) \\}\\!\\}$ is a nested aggregation. Consider the vertices in Figure 2 (left); we refer to the top vertices as $a, b$ and the bottom vertices as $x, y$. Then Figure 2 is meant to be read as: the blue vertex receives the colors $\\{\\!\\{ \\{\\!\\{ c_a^t, c_b^t \\}\\!\\}, \\{\\!\\{ c_x^t, c_y^t \\}\\!\\} \\}\\!\\}$.
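The multiset bookkeeping above can be sketched in plain Python (our own hypothetical illustration; the color values and the `multiset` helper name are not from the paper):

```python
# Our own sketch: a multiset {{...}} represented as a canonically sorted tuple,
# so it is order-invariant, hashable, and can itself be nested inside another multiset.
def multiset(items):
    return tuple(sorted(items))

# Hypothetical colors for the vertices discussed for Figure 2:
# top vertices a, b and bottom vertices x, y.
c_a, c_b = "red", "green"
c_x, c_y = "green", "green"

# Two single aggregations, one per neighborhood.
top = multiset([c_a, c_b])        # {{c_a, c_b}}
bottom = multiset([c_x, c_y])     # {{c_x, c_y}}

# A nested aggregation: an aggregation whose elements are themselves aggregations.
nested = multiset([top, bottom])  # {{ {{c_a, c_b}}, {{c_x, c_y}} }}

# Insertion order does not matter, as expected of a multiset.
assert multiset([c_b, c_a]) == top
```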
> Line 94 the notation $2^{(U^l)}$ was unknown to me so I had to google it. It means the space of all mappings/functions $a: U^l \\to \\{0,1\\}$, right? Perhaps some detail/definition can be added.
The _power set_ of a set $X$ contains all subsets $2^X=\\{Y \\mid Y \\subseteq X\\}$ of that set $X$. For a set $U$ and an integer $\\ell > 1$, the set $U^\\ell=\\{ (u_1, \\ldots, u_\\ell) \\mid u_1 \\in U, \\ldots, u_\\ell \\in U \\}$ is the set containing all tuples of length $\\ell$ built from elements of $U$. Thus, for a set of objects $U$, the set $2^{(U^\\ell)}$ consists of all possible sets built from $\\ell$-tuples of $U$. For example, let $U = \\{x, y \\}$ and $\\ell = 2$. Then, $U^\\ell = \\{ (x,x), (x,y), (y,x), (y,y) \\}$ and $2^{(U^\\ell)} = \\{\\emptyset, \\{ (x,x) \\}, \\{ (x,y) \\}, \\{ (y,x) \\}, \\{ (y,y) \\}, \\{ (x,x), (x,y) \\}, \\ldots, \\{(x,x), (x,y), (y,x), (y,y) \\} \\}$. Finally, the neighborhood function $\\mathcal{N}$ assigns one set from the domain $2^{(U^\\ell)}$ to each element $u\\in U$. We will add some details about this to the paper.
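For concreteness, the enumeration above can be reproduced with the Python standard library (a minimal sketch of our own; the `powerset` helper is a hypothetical name, not code from the paper):

```python
from itertools import chain, combinations, product

U = ["x", "y"]
ell = 2

# U^ell: all tuples of length ell over U.
tuples = sorted(product(U, repeat=ell))
# [('x', 'x'), ('x', 'y'), ('y', 'x'), ('y', 'y')]

# 2^(U^ell): every subset of U^ell, i.e. every candidate value of N(u).
def powerset(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

subsets = powerset(tuples)
assert len(tuples) == len(U) ** ell      # 2^2 = 4 tuples
assert len(subsets) == 2 ** len(tuples)  # 2^4 = 16 subsets
```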
> Are there any general recommendations for how to derive the graph transform to simulate non-standard MPNNs, or is this a case by case analysis? E.g., it is still unclear to me how one should derive/define R.
We provide an automated way of deriving the graph transformations with Algorithm 4 (strong simulation) and Algorithm 6 (weak simulation). We see this as one of the key strengths of our paper: it is no longer necessary to design the graph transformations by hand for every architecture and to prove their correctness. Instead, our algorithms automatically yield the required graph transformations to simulate the given non-standard MPNN. For strong simulation, the algorithm first creates a vertex for every object and then adds edges between two vertices if the corresponding two objects exchange messages in the simulated algorithm. The main difference for weak simulation lies in the cases with non-pairwise message passing and nested aggregations, where we need to add dummy vertices (see proof sketch of Theorem 5.2).
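The vertex-and-edge construction just described can be sketched abstractly as follows (our own toy illustration; `strong_simulation_graph`, `exchanges_message`, and the example "order" relation are hypothetical names, not Algorithm 4 itself):

```python
# Core idea of the strong-simulation transformation: one vertex per object, and
# one undirected edge whenever two objects exchange messages in the simulated algorithm.
def strong_simulation_graph(objects, exchanges_message):
    vertices = list(objects)
    edges = set()
    for u in vertices:
        for v in vertices:
            if u != v and exchanges_message(u, v):
                edges.add((min(u, v), max(u, v)))  # canonical undirected edge
    return vertices, sorted(edges)

# Toy example: four objects with an "order" label; suppose messages flow only
# between objects whose orders differ by exactly one (think cells and their boundaries).
order = {0: 0, 1: 0, 2: 1, 3: 2}
verts, edges = strong_simulation_graph(range(4), lambda u, v: abs(order[u] - order[v]) == 1)
# edges == [(0, 2), (1, 2), (2, 3)]
```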
---
Rebuttal Comment 1.1:
Title: Still in support of acceptance
Comment: Thank you for your reply, and sorry for the late reply (holidays...). I appreciate the clarifications and have confidence the (otherwise already nice) paper will be further improved with these minor adjustments.
I keep my score at 8 (strong accept) | Summary: The authors introduce methods for simulating certain graph neural networks (GNNs) using standard message-passing algorithms composed with graph transformations. To do so, the authors introduce a class of nonstandard message-passing algorithms they call "augmented message passing" (AMP) algorithms, demonstrate that many standard GNNs are composed of AMP layers, and explicitly construct graph transformations such that standard message passing on the transformed graph input is at least as expressive as a given AMP algorithm. The authors also discuss "weak simulation," and consider more specifically existing techniques that can be weakly simulated via the methods introduced by the authors. Finally, the authors numerically evaluate the performance of their introduced methods.
Strengths: The authors contribute a novel method for constructing message-passing neural networks (MPNNs) on transformed graph inputs that are at least as expressive as a wide class of GNNs with nonstandard message passing. This gives a standard way of viewing GNNs composed of AMP layers, which encompasses many state-of-the-art GNNs. This also formalizes previous work that considered graph transformations composed with MPNNs as a way to make MPNNs more expressive; simultaneously, it gives motivation for constructing GNNs that do not fit within this framework to construct more expressive GNNs This mapping between certain GNNs with nonstandard message passing and MPNNs acting on graph transformations also gives a potentially simpler framework for analyzing properties of such GNNs, though the authors are saving such study for future work.
Weaknesses: One weakness is that it is not immediately apparent what practical advantages the introduced simulation methods yield over direct implementations of the GNNs being simulated. The authors mention one advantage being that standard libraries for implementing MPNNs can be used once a given GNN architecture has been shown to be simulable via an MPNN acting on a graph transformation, but that is a relatively minor advantage.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I think this work is very nicely done, and only recommend some further exposition on the potential utility of the authors' work.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately discussed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very encouraging and positive feedback!
On the reviewer's remark:
> I think this work is very nicely done, and only recommend some further exposition on the potential utility of the authors' work.
We will provide further information on the utility of this work in the final version of this paper. Our work has three main utilities: (1) theory of expressivity, (2) design of new GNNs and (3) implementing GNNs with graph transformations.
**(1) theory of expressivity:** Simulation allows us to investigate the expressivity of different GNNs through a unified lens by analyzing the corresponding graph transformation. For example, it should be possible to apply the results of Geerts and Reutter (2022) to obtain this result: any GNN that can be weakly simulated and whose graph-to-structure and structure-to-graph encoding can be written in a tensor language with $k+1$ indices is upper bounded in expressivity by $k$-WL. Similarly, it should be possible to apply the results of Morris et al. (2023) to obtain VC bounds for any weakly simulated GNN.
**(2) design of new GNNs:** As noted by the reviewer, our theorems indicate that certain constructions (nested aggregations and non-pairwise message passing) **cannot** be strongly simulated and are thus fundamentally different from the message passing paradigm. Thus, to build GNNs that go beyond MPNNs in expressivity it seems promising to investigate such constructions.
**(3) implementing GNNs with graph transformations:** Instead of implementing a GNN with non-standard message passing, it can be implemented as a graph transformation together with an MPNN. We see the main advantage of this method in the fact that a graph transformation is easier to implement than non-standard message passing; moreover, the resulting method offers:
- Framework agnosticism: graphs can be stored independently of the library used, for example as plain text.
- Ease of use: to use a GNN implemented via a graph transformation, it is only necessary to apply the graph transformation to the data and use a standard MPNN. Thus, it is not necessary to implement a complex model, incorporate it into an existing codebase, and get it to work.
This utility is also relevant to the only weakness raised by the reviewer:
> One weakness is that it is not immediately apparent what practical advantages the introduced simulation methods yield over direct implementations of the GNNs being simulated. The authors mention one advantage being that standard libraries for implementing MPNNs can be used once a given GNN architecture has been shown to be simulatable via an MPNN acting on a graph transformation, but that is a relatively minor advantage.
For many GNNs it is often not straightforward to use the publicly available implementations. Consider the examples of the GNNs we used in this paper: CWN and DSS.
CWN is compatible with PyTorch 1.7 and CUDA 10.2 [(source)](https://github.com/twitter-research/cwn), which is too old to support RTX 3080 GPUs, as these are based on the Ampere architecture [(source)](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/), which requires CUDA 11 [(source)](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/). Trying to get CWN to work with newer versions failed for us; we actually had to find old hardware to run the model.
DSS is implemented for PyTorch Geometric 1.7.2, which by now is almost 2 years out of date. In version 2.0.2 of PyTorch Geometric, the behavior of attributes with `batch` in the name changed (line 420 [here](https://github.com/pyg-team/pytorch_geometric/blob/2.0.2/torch_geometric/data/data.py)) compared to 1.7.2 (line 190 [here](https://github.com/pyg-team/pytorch_geometric/blob/1.7.2/torch_geometric/data/data.py#L178)), which causes issues with the `subgraph_batch` attribute in the implementation of DSS. This means that DSS requires a fix to work with newer versions of PyTorch. This shows that it is non-trivial to use existing implementations of GNNs with non-standard message passing. We believe that having a simplified way of running many GNNs would move the field forward by making it easier to use existing GNNs.
#
(Geerts and Reutter 2022): Floris Geerts and JL. Reutter; Expressiveness and Approximation Properties of Graph Neural Networks; ICLR, 2022
(Morris et al 2023): Morris Christopher, Martin Grohe, Jan Martin Tönshoff, and Floris Geerts; WL meets VC; ICML, 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. Including the above in the final version of the paper would address my concern. | Summary: The paper formally introduces the notion of an MPNN simulating a GNN via a graph transformation. With these definitions, the authors further investigate which classes of GNNs can be simulated by MPNNs.
Strengths: The work presents the first systematic theoretical investigation toward understanding which GNN can be simulated by MPNN.
Weaknesses: The simulation is based on expressiveness equivalence, so more empirical investigations may be required to understand the equivalence in practice.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be nice to see more empirical investigations under the notions of simulations.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and appreciate that the reviewer acknowledges the "first systematic theoretical investigation toward understanding which GNN can be simulated by MPNN". Concerning the weakness and question:
> (Weakness) The simulation is based on expressiveness equivalence, so more empirical investigations may be required to understand the equivalence in practice.
> (Question) It would be nice to see more empirical investigations under the notions of simulations.
Our experiments indicate that simulation achieves performance similar to that of the simulated algorithms. We compare simulation via MPNNs against 3 different non-standard message passing GNNs on 11 different datasets (see Tables 1, 2, 4, 5).
We agree that additional empirical investigations would further our understanding of GNNs and simulations in practice. While this is highly interesting, a fair comparison with a large number of higher-order GNNs requires that all higher-order GNNs are evaluated with a fair selection of hyperparameters (and bounded runtime). This goes well beyond the scope of the current paper (we are not aware of any such study) and is an avenue for future work. Note that a fair comparison among transformation+simulation approaches alone would be much easier, as the same MPNN implementation and structure could be used.
Rebuttal: Dear reviewers, we thank you very much for your detailed comments and respond to your reviews individually below. We appreciate that you acknowledge the novelty, originality, the systematic theoretical investigation, and the well-conducted experiments.
Dear all, to further extend the generality of our approach, we have taken a quick look at some recent papers and could prove that two additional GNNs can be strongly simulated: Shortest Path Networks [Abboud et al., 2022] and Generalized Distance Weisfeiler-Leman [Zhang et al., 2023]. We have added a list of utilities that our work provides to the review of r6Zx and look forward to the discussion with you.
Best,
The authors of Expressivity-Preserving GNN Simulation
#
[Abboud et al., 2022]: Abboud et al.; Shortest Path Networks for Graph Property Prediction; Learning on Graphs Conference, 2022
[Zhang et al., 2023]: Zhang et al.; Rethinking the Expressive Power of GNNs via Graph Biconnectivity; ICLR, 2023 | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This work presents a simulation theory/method for efficiently approximating non-standard GNN functions via standard MPNNs plus graph transformations. It starts from the cases that can be strongly simulated and extends to weak simulation for a comprehensive conclusion. A simulation algorithm is also proposed and verified with experimental results.
Strengths: 1. The problem is new (especially focused on non-standard GNNs), clearly defined and rigorously investigated. The motivation is quite practical (if I understand correctly).
2. Experiments on graph classification tasks verify the effectiveness of the proposed algorithms, including performance on AUC, time cost, RMSE, etc.
Weaknesses: 1. The structure of the paragraphs dramatically reduces the readability.
2. See the questions below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: A. Major
1. Lines 21-22 should give more examples of 'non-standard' message passing and 'standard' MPNNs for better readability at the beginning. Better to use a table.
2. As I understand it, the core of the work is to transfer these 'non-standard' MP functions to MPNNs plus graph transformation functions. Why can these graph transformation functions not also be understood as MPNNs? If this is not the case, is this the essential contribution, rather than just giving them a different name?
3. It seems that strongly simulated GNNs are equivalent to higher-order WL architectures. For example, subgraph-based GNNs.
4. For the experiments, the most important aspect is the time cost, while Table 6 mixes the convergence epochs and the computational complexity. More statistics on FLOPs and convergence analysis are recommended.
B. Minor
1. Is the expressiveness of MPNNs in this paper about the 1-WL test? Or the authors should provide different explanations for the word "expressivity" each time it is used. For example, in line 503 of the appendix, should it read 1-WL test?
2. Some errors in citation notation, e.g., lines 42-45.
3. In line 94, the definition of the neighborhood function, should it be $\mathcal{N}_l$? Since $l$ is a parameter of the function, or two functions with different settings of $l(\mathcal{N})$ can never know. There may be some misunderstanding of line 95.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A,
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and appreciate that the reviewer acknowledges our rigorous investigation of a practically motivated novel problem.
The reviewer mentions that the "structure of paragraphs" reduces readability. Could you give us more details so we can fix it? Do you mean the inline equations and/or rather long paragraphs? We will improve readability in the camera-ready version using the extra page.
### Concerning the Questions
**Major**
> 1. Lines 21-22 should give more examples of 'non-standard' message passing and 'standard' MPNNs for better readability at the beginning. Better to use a table.
We will add more examples to this section. With standard message passing we refer to MPNNs - the most common type of GNNs - which aggregate messages over each node's neighborhood and whose expressivity is bounded by $1$-WL (Morris et al., 2019, Xu et al., 2019). With non-standard message passing we mean any other GNN, typically with expressivity surpassing $1$-WL.
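To make the $1$-WL bound concrete, color refinement can be sketched in a few lines (our own minimal illustration, not code from the paper; `wl_refine` and the example graph are hypothetical):

```python
# 1-WL color refinement on an adjacency list: iteratively recolor each vertex
# by its own color together with the multiset of its neighbors' colors.
def wl_refine(adj, rounds=3):
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        # Compress each distinct signature to a small integer color id.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Path on three vertices: the endpoints share a color, the middle vertex differs.
path = {0: [1], 1: [0, 2], 2: [1]}
assert wl_refine(path) == {0: 0, 1: 1, 2: 0}
```

An MPNN such as GIN can distinguish two graphs only if this procedure eventually assigns them different color multisets.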
> 2. As I understand it, the core of the work is to transfer these 'non-standard' MP functions to MPNNs plus graph transformation functions. Why can these graph transformation functions not also be understood as MPNNs? If this is not the case, is this the essential contribution, rather than just giving them a different name?
On the one hand, graph transformations take graphs as input and output a modified graph. We use them as pre-processing. On the other hand, MPNNs are differentiable functions that take graphs as input and output either node or graph embeddings (in Euclidean space). Furthermore, graph transformations cannot be seen as MPNNs: graph transformations can perform operations that are fundamentally impossible for MPNNs alone. For example, the transformation in Figure 1 transforms induced cycles in the graph into specially labeled vertices. As it is known (Chen et al., 2020) that MPNNs cannot detect induced cycles, it follows that this transformation cannot be based on an MPNN.
> 3. It seems that strongly simulated GNNs are equivalent to higher-order WL architectures. For example, subgraph-based GNNs.
It depends on what we regard as "higher-order": $k$-WL specifically or, more generally, some variants of WL. Strongly simulatable GNNs do not necessarily correspond (as far as we know) to some $k$-WL test and might have expressivity incomparable to it. However, any strongly or weakly simulatable GNN can be loosely seen as a variant of WL, potentially increasing expressivity. In particular, replacing the final "function application (S4)" step in augmented message passing with any injective function (e.g., the identity function) will result in a color update function, which can be viewed as a variant of WL. Hence, our work makes WL / MPNNs at least as expressive as any strongly simulatable GNN through graph transformations.
> 4. For the experiments, the most important aspect is the time cost, while Table 6 mixes the convergence epochs and the computational complexity. More statistics on FLOPs and convergence analysis are recommended.
Fair empirical convergence analysis in realistic settings is difficult, as the different architectures often use different hyperparameters and performance depends strongly on them. An extensive empirical comparison of time vs. performance over different hyperparameters or tuning strategies goes well beyond the scope of this paper (we are not aware of any paper presenting such an analysis).
Having said that, we like the idea of adding more statistics such as FLOPs to the final version of the paper and will do so if time permits.
**Minor**
> 1. Is the expressiveness of MPNNs in this paper about the 1-WL test? Or the authors should provide different explanations for the word "expressivity" each time it is used. For example, in line 503 of the appendix, should it read 1-WL test?
When talking about expressivity (or expressiveness), we mean the ability of a GNN (or any function defined on graphs) to distinguish non-isomorphic graphs, i.e., its ability to map different graphs to different embeddings. MPNNs are at most as expressive as the $1$-WL test, and there are specific MPNNs (such as GIN) that are exactly as expressive as $1$-WL (Morris et al., 2019; Xu et al., 2019). Thus, when we compare the expressivity of some GNN to an MPNN, we also compare it to the $1$-WL test. Line 503 reads: _This lead to the development of new GNNs that have higher expressivity than MPNNs._ Hence, line 503 equivalently also means: _This lead to the development of new GNNs that have higher expressivity than the $1$-WL graph isomorphism test._
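As a generic illustration of the $1$-WL color refinement test mentioned above (not code from the paper; the per-graph integer relabeling is a simplification of the usual injective hashing), the following sketch shows a classic pair of non-isomorphic graphs that $1$-WL, and hence any MPNN, cannot distinguish:

```python
def wl_colors(adj, rounds):
    """Iteratively refine node colors: a node's new color is determined by
    its current color plus the multiset of its neighbors' colors."""
    colors = {v: 0 for v in adj}  # uniform initial coloring (unlabeled graph)
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # relabel signatures with small integers (per-graph injective relabeling)
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return sorted(colors.values())  # color histogram

# Two non-isomorphic graphs that 1-WL cannot tell apart:
# a 6-cycle vs. two disjoint triangles (both are 2-regular on 6 nodes).
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colors(cycle6, 3) == wl_colors(triangles, 3))  # True
```

Both graphs stay monochromatic under refinement, so any MPNN maps them to the same embedding; this is exactly the kind of pair the introduction's expressivity claim refers to.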
> 2. Some errors in citation notation, e.g., lines 42-45.
Thank you, we will fix this.
> 3. In line 94, the definition of the neighborhood function, should it be $\mathcal{N}_\ell$? Since $\ell$ is a parameter of the function, or two functions with different settings of $\ell(\mathcal{N})$ can never know. There may be some misunderstanding of line 95.
We do not fully understand the question. Can you please explain more exactly what you mean? We agree that it might be useful to make the parameter $\ell$ explicit in $\mathcal{N}_\ell$. However, this would clash with other notation we are already using. We will add further explanations to clarify the definition of the neighborhood function.
**References**
(Morris et al., 2019): Morris et al.; Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks; AAAI, 2019
(Xu et al., 2019): Xu et al.; How Powerful are Graph Neural Networks?; ICLR, 2019
(Chen et al., 2020): Chen et al.; Can Graph Neural Networks Count Substructures?; NeurIPS, 2020
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal and the clarifications.
Comment: Some of my concerns were addressed. Here, I would like to clarify some questions:
The "structure of paragraphs" reduces readability:
-- The paper's organization in the intro and background sections can be optimized. In the intro, we want to understand some background, issues, and the motivation for the paper's work.
-- Readers want to see more than just a simple list of works in the related work. Instead, we should present an organized summary of existing work;
-- The contribution section should be more concise, clear, and organized.
The definition of the neighborhood function: Your understanding is correct; I still believe that $\ell(\mathcal{N})$ is not a good representation.
---
Reply to Comment 1.1.1:
Comment: Thank you for the clarifications. We will take them into account for the final version of our paper. | Summary: The paper proposes a novel approach to simulate state-of-the-art graph neural networks (GNNs) using standard message passing. The authors introduce graph transformations that preserve the expressivity of GNNs and allow for better code optimization and competitive predictive performance on various molecular benchmark datasets. The paper presents two types of simulation, weak and strong, and evaluates their performance empirically.
Strengths: - The paper is highly original and presents a novel approach to simulating GNNs.
- The authors provide a thorough evaluation of the proposed method on 10 benchmark datasets.
- The paper is well-written and easy to follow, with clear explanations of the proposed method and its evaluation.
- The proofs in the supplementary are comprehensive. I am impressed by the detailed step-by-step illustrations.
Weaknesses: Lack of Clarification.
- The intro is not logically coherent. For example, it lacks a formal definition of “simulation” before expanding on it (e.g., is simulation using graph transformations so that standard message passing can reach performance comparable to state-of-the-art graph neural networks?). In the introduction, no formal definition of “simulation” is given before expanding on “strongly/weakly simulated” MP algorithms.
- Likewise, how do you define graph transformations, and what are all possible types of graph transformations? Not until Section 3 are formal definitions given.
There are claims in the introduction that seem not grounded, which require justification and explanation:
- Line 19-20: This is due to their limited expressivity: for all MPNNs, there are pairs of non-isomorphic graphs that get the same embedding
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - How sensitive is the performance of the proposed method to the choice of hyperparameters?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and appreciate that the reviewer acknowledges our novel approach and thorough evaluation, and finds the paper highly original and `well-written and easy to follow'. The reviewer raises only a lack of clarity as a weakness, which we address below.
### Concerning the Weaknesses
> Lack of Clarification.
> The intro is not logically coherent. For example. it lacks a formal definition of “simulation” before expanding on it (e.g. simulation is using graph transformations so that standard message passing can reach comparable performances as state-of-the-art graph neural networks? In the introduction, no formal definition of “simulation” is given before expanding on “strongly/weakly simulated” MP algorithms.
>Likewise, how do you define graph transformations and what are all possible types of graph transformations? Only until Section 3 were formal definition given
To address the points raised, we will provide some intuition and give a high-level definition of our central concepts, simulation and graph transformation, in the introduction.
> There are claims in the introduction that seem not grounded, which require justification and explanation:
> Line 19-20: This is due to their limited expressivity: for all MPNNs, there are pairs of non-isomorphic graphs that get the same embedding
We will clarify in the camera-ready version that this follows from the fact that MPNNs are bounded by $1$-WL in their expressivity (Morris et al., 2019; Xu et al., 2019).
### Concerning the Question
> How sensitive is the performance of the proposed method to the choice of hyperparameters?
In our experiments, our approaches (simulating other GNNs) were similarly sensitive to hyperparameters as the original GNNs. In particular, the number of message passing layers and the graph pooling operations are important to tune. | null | null | null | null |
A Unifying Perspective on Multi-Calibration: Game Dynamics for Multi-Objective Learning | Accept (poster) | Summary: This paper presents a novel approach to multicalibration by leveraging game dynamics and no-regret learning algorithms. The central idea is that multicalibration can be modeled as a multi-objective learning problem where an adversary and a learner play against each other, guided by no-regret dynamics.
The authors propose three types of dynamics: no-regret no-regret (NRNR), no-regret best-response (NRBR), and best-response no-regret (BRNR), each serving different purposes. These dynamics are used to establish the multicalibration algorithms that match or improve the fastest known sample complexity rates for multicalibration, provide deterministic multicalibrated predictors, and offer online multicalibration.
This work also extends the application of these dynamics to other fairness notions, specifically multi-group learning and multi-distribution learning. For the multi-group learning problem, the authors devise an optimal multi-group learning algorithm that relies on NRNR dynamics and is simpler than the existing approaches.
The authors validate their theoretical claims with empirical results on a few standard datasets. The results highlight the importance of effective no-regret algorithms for better multicalibration, with Optimistic Hedge outperforming others in the no-regret no-regret dynamics.
Overall, this paper contributes to the field by providing a unified, game-theoretic framework for multicalibration that unites disparate existing results and offers more efficient solutions, both in theory and practice. Moreover, it highlights the broader applicability of game dynamics and no-regret algorithms to other fairness notions in machine learning.
Strengths: The paper introduces a novel perspective by approaching multicalibration and other fairness notions using game dynamics and no-regret learning algorithms. It presents multicalibration as a multi-objective learning problem within a game-theoretic framework. However, while the application to multicalibration is new, the use of no-regret algorithms and game-theoretic models is well-established in other areas of calibration literature.
The paper offers a technically sound approach with rigorous mathematical derivations. The theorems are well-proved, and the proposed algorithms are clearly detailed. However, the empirical section could be expanded upon to strengthen the validation of the theoretical claims with real-world data.
The authors have done a reasonable job in articulating complex concepts and methodologies, with the paper being generally well-structured. The use of tables aids understanding, though more insights can be provided for the technical results.
The work has the potential to unify and extend prior work on multicalibration and other fairness notions. However, the impact of the work may depend heavily on how effectively these theoretical insights can be translated into practical applications. Furthermore, while the findings from the empirical results could guide the selection of no-regret algorithms in multicalibration tasks, the experimental validation is relatively limited and does not fully exploit the range of situations where these algorithms could be applied.
Weaknesses: The paper provides a theoretically strong development of multicalibration algorithms using game dynamics. However, there seems to be a gap between the theoretical development and the empirical results. The authors could improve this aspect by designing more comprehensive experiments that test a wide range of scenarios to validate their theoretical claims. These could include different types of datasets, varying levels of complexity, and possibly real-world use cases.
The experimental evaluation is currently limited to only a few standard but simple datasets. This may not sufficiently test the robustness of the proposed algorithms under different conditions. More experiments with diverse datasets would provide a better understanding of the algorithms' performance and potential limitations. Additionally, it would be beneficial to compare the proposed methods with more baseline or state-of-the-art algorithms for multicalibration to understand the relative performance.
Also, the paper could be improved by providing more detailed descriptions of the proposed algorithms. While the authors do discuss the high-level ideas behind using game dynamics for multicalibration, it may be beneficial for readers to have more specific details about how these algorithms are implemented.
Although the authors discuss several previous works, it is not entirely clear how their contributions improve upon or differ from these existing methods. Besides theoretical comparison, a clearer empirical comparison would make the authors' contributions more evident.
The paper could benefit from a more explicit discussion of the limitations of the proposed methods.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The paper provides an improved theoretical framework for multicalibration using game dynamics. However, it seems there's a disconnect between the theory and empirical results presented. Could the authors elaborate on why specific experiments were chosen and how they directly validate the theoretical developments?
The empirical evaluation mainly focuses on standard datasets like UCI. How do the authors envision their proposed algorithms performing on other types of datasets, especially those with different characteristics, such as image datasets or high-dimensional problems? Could they provide any insights or theoretical guarantees on their algorithms' robustness across varied datasets?
The high-level descriptions of the proposed algorithms are appreciated, but having more concrete, step-by-step details of the algorithm implementations would be very helpful. Could the authors provide pseudocode or a more in-depth explanation of their proposed algorithms?
The paper could benefit from a more detailed comparison with previous works. How do the authors' contributions improve upon or differentiate from existing methods, specifically in terms of performance and applicability?
The paper presents a novel framework that generalizes previous works. How far can this generalization go? Are there any theoretical or practical constraints that limit its applications? Are there any lower bounds? Any insight into this would be beneficial for the understanding and application of the framework.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not provide a clear discussion of the limitations of their work or potential negative societal impacts, which is a critical part of any comprehensive research paper. This is an area where the paper could be improved. Here are a few suggestions for potential limitations and societal impact considerations:
Discuss the computational complexity and scalability of the proposed methods. This is crucial for real-world applications, particularly when dealing with large and complex datasets.
Given that the paper deals with fairness notions in learning, the authors could consider discussing the implications of misapplication of these methods. While the goal is to improve fairness, any shortcomings or misuse could unintentionally lead to bias or unfair outcomes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Below we respond to your questions.
**On providing Pseudocode**\
We already provide detailed step-by-step pseudocode for every algorithm in the Appendix: see Algorithms 2, 3, 4, 5, 6, 7. We also have fully released our source code for the experiments (see supplemental material).
**On including experiments comparing to state-of-the-art algorithms in multicalibration**\
Our experiments already include comparisons to all existing and baseline multicalibration algorithms (see lines 1189-1191 of the appendix for further clarification). An important aspect of our framework is that it recovers all existing multicalibration algorithms. In the current presentation of Table 2, we refer to existing baseline algorithms by the name of the dynamics they take in our framework, e.g., the original multicalibration algorithm of HKRR18 is referred to as GD-ERM and the newer multicalibration algorithm that has become the community’s de facto standard (see e.g., KGZ19, DKRRY21) is referred to as Hedge-ERM. To clarify that these are indeed the baseline algorithms, we will add the appropriate citations on those row numbers in the final version of the paper.
**On the choice of experiment datasets**\
Our experiments, which evaluate on three datasets including both binary and multi-class problems, are among the most thorough in the multicalibration literature. Most prior work in this space is purely theoretical with no empirical evaluations. In cases where prior work in multicalibration has done empirical evaluations, those evaluations have only used toy simulated datasets (e.g. in GKSZ22). Indeed, our work includes the most extensive empirical evaluation of existing and introduced algorithms in multicalibration to date. This is in addition to the main contribution of our paper, which is to unify the theory of multicalibration.
**On the lack of deep learning results**\
Running deep learning experiments on high-dimensional image datasets is both beyond the scope of this work and of limited relevance to multicalibration. To the best of our knowledge, multicalibration has never been successfully scaled to deep learning settings before (though we note that multi-accuracy, a related but much easier problem, has). Moreover, UCI datasets present a more relevant empirical setting since they include demographic features that provide meaningful proxies for the definitions of groups in multicalibration.
**On misapplication of multicalibration as a fairness technique**\
Multicalibration has long been used as a method of algorithmic fairness and is an established line of work (see HKRR18). Overall, we are interested in improving the theory and design of algorithms for multicalibration that lead to better statistical and computational bounds. Note that in Section E.2 we do address some—to the best of our knowledge—previously overlooked shortcomings of multicalibration as a fairness technique and propose some solutions. However, a broad discussion of misapplication of statistical sample complexity bounds is not in the scope of this work.
**References**\
[HKRR18] U. Hebert-Johnson, M. P. Kim, O. Reingold, and G. N. Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In J. G. Dy and A. Krause, editors, Proceedings of the International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research, pages 1944–1953. PMLR, 2018\
[GKSZ22] P. Gopalan, M. P. Kim, M. Singhal, and S. Zhao. Low-degree multicalibration. In P.L. Loh and M. Raginsky, editors, Proceedings of the Conference on Learning Theory (COLT), pages 3193–3234. PMLR, 2022. | Summary: The paper provides a two-player dynamics framework that seeks to unify many strands of recent work on multicalibration and multiobjective optimization. With three possible setups considered: No regret against No regret, Best response against No regret, and Best response against Best response, efficient algorithms are provided to demonstrate that these generic setups apply in particular to (multi-)calibration-like constraints. The framework is then applied to several existing and new settings in the literature with the goal of improving some convergence bounds or, respectively, simplifying and streamlining the analysis of specific algorithms. Some experiments are given that show what happens when proposed dynamics are run with specific well-known no regret algorithms in the driver's seat.
Strengths: To me, the main strength of this paper is the rather clean nature of the setup, which promises that by combining, in any combination, best-responding and no-regret agents, it is possible (though not without further work) to recover existing multigroup fairness results. Previous guarantees are (with a couple exceptions) improved somewhat marginally, or greater simplicity of the framework is claimed in some other cases, but overall it is the generality of this framework --- showcased by its adaptability and ability to recover various insights about calibration (not just regret bounds, but also questions of the simplicity of the output calibrated model, amount of randomness required, etc) --- that is the paper's main forte.
Additionally, the conditional multicalibration setup presented in the paper (and generalizing previous conditional results in the quantile setting) looks like an interesting addition to the literature.
Weaknesses: No particular weaknesses, other than the relatively well-studied nature of, and relatively small gains in, some of the applications (but no big deal), as well as the somewhat terse and cramped presentation.
For instance, there are a few "intuitive-sounding" claims made about where NRNR, BRNR, BRBR may be applied to the greatest utility, but the actual applications are then scattered throughout the paper rather than recalled immediately (there is also Table 1, but I would appreciate a more intuitive dive into which dynamic was used where and why, at the point where these dynamics were actually introduced). For another example, I would be interested in further comments on potential alternative definitions of conditional multicalibration and a slightly expanded treatment of the comparison to the existing 1/sqrt(# people in group) type of guarantee.
Still, the paper is overall solid and well-done and I enjoyed reading it.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the above section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive review. We appreciate your feedback on including a discussion about which dynamics are used where and lead to which improvements—we will make some revisions to further improve the flow. We also appreciate your suggestion to include additional comments on our sqrt-conditional multicalibration result (Corollary E.4).
**On further details for conditional multicalibration**\
We would like to highlight the following points regarding Corollary E.4 to clarify its significance, as we believe it is an example of a significant theoretical improvement that our framework yields as a consequence of its versatility.
Although existing multicalibration algorithms only guarantee an error bound that—for each group $G$—scales inversely with the group’s probability mass $\Pr(x \in G)$, we show **it is possible to obtain an error bound that scales with $1 / \sqrt{\Pr(x \in G)}$ rather than $1 / \Pr(x \in G)$**. We are the first to obtain this stronger guarantee (Corollary E.4). We achieve this without introducing any additional assumptions and with no increase in the sample complexity (compared to the sample complexity of algorithms that do not use adaptive data analysis) or only a cube-root increase in sample complexity (compared to algorithms that use adaptive data analysis). Even more interestingly, this result is obtained by simply replacing first-order regret bounds with second-order regret bounds in our game dynamics. Please let us know if you have further questions and comments about these results.
Based on your feedback, we plan to add some of these clarifying comments to the section. We will also include a refinement of Corollary E.4 that is more readable, removes an extraneous $\sqrt{S'}$ factor from the sample complexity of Corollary E.4, and makes clear that the corollary does not need to assume knowledge of $\Pr(x \in S)$ nor make any other assumptions.
---
Rebuttal Comment 1.1:
Title: Acknowledgment
Comment: Thank you for the reply. With the expectation that the flow of the paper will be improved --- especially with regards to the uses of and intuition on various combinations of no regret (NR) and best responding (BR) agents --- I will keep my current score. | Summary: This work exploits connections to game dynamics to propose a unifying algorithmic framework to address the multicalibration problem which has been recently used for tackling fairness concerns in machine learning. More precisely, based on the classic game dynamics approach used in learning problems, it is shown that multicalibration results can be seen as learning dynamics for two-player zero-sum games relying on no-regret algorithms or best response dynamics. Using this unifying framework, the paper recovers some guarantees for existing multicalibration algorithms with simplified proofs and results. Multi-objective learning guarantees are shown and new guarantees are also established for several multicalibration settings including in particular an exponential reduction in the complexity of k-class multicalibration over prior work. Experiments were conducted to evaluate the empirical performance of multicalibration algorithms on some real-world datasets.
Strengths: - The paper proposes a unifying approach using game dynamics which recovers prior results with simpler proofs and establishes novel guarantees improving over prior work in multiple settings.
- This paper is well-written, the presentation is very clear.
- While I am not familiar with the calibration/multicalibration literature and did not go through the long appendix supporting the main part in detail, the results are sound and quite rigorously exposed, and the proofs, which leverage for instance prior online learning results, seem solid to the best of my knowledge.
- Several additional experimental results are also provided in the appendix to support the theoretical findings.
Weaknesses: Regarding novelty, the related work section mentions in l. 55-56 that ‘no work has established a broad connection between no-regret learning and multicalibration’. No-regret learning has previously been used for calibrated forecasting, as acknowledged by the paper. While it is mentioned that multicalibration ‘has very different challenges than calibrated forecasting’, the discussion of these challenges is reduced to a single line, l. 57-58, in the related work section. Given that one of the main contributions of this work is the unifying framework based on the connection with game dynamics, I would expect a more detailed discussion to further clarify the novelty with respect to prior work and calibrated forecasting regarding this particular aspect. For instance, some proofs, such as that of Theorem 3.8, follow similar lines to the proof of calibration, as also mentioned in l. 246-247. Moreover, it seems that this connection to game dynamics was known for multi-objective learning, and multicalibration is linked to it via the simple Facts 2.5 and 2.6.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The game dynamics approach to multi-objective learning is a ‘common approach’ as mentioned in l. 138. The paper connects multicalibration to multi-objective learning in Facts 2.5 and 2.6 which leads to a connection between multicalibration and the game dynamics approach. Are Facts 2.5 and 2.6 novel results? Is there a particular reason for using ‘Facts’ for these results instead of ‘Lemma’ for instance? Is this related to the simplicity of the proofs in Appendix B once the losses are defined as in the facts?
- How do you justify the equality in l. 230-231?
- Section 3.1 is about multicalibration but section 3 is about multi-objective learning. Section 4 is 'Multicalibration with Game Dynamics'. Would it be more appropriate to move section 3.1 to section 4?
**Minor questions and suggestions:**
- Why is the baseline of (2) not also used in (1) so that the formulations are unified?
- The notations max and min are used for minimization over a hypothesis class or maximization over a class of loss functions and a set of data distributions. How are these guaranteed to be reached without further assumptions? Are the classes supposed to be finite or are the more general notations sup and inf more appropriate? For instance, cardinalities are used in Lemma 3.5 while the preliminaries do not seem to specify the nature of the classes.
- Minor suggestion: in Definition 2.2, you could keep the same notation as in Definition 2.1 and only change the dependence on $t$ for $D$ and $p$ to ease the reading (with the same order for the quantities and a single indicator function).
- l. 82: $\Delta(\mathcal{Y})^{\mathcal{X}}$ with parentheses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations of the results do not seem to be clearly discussed throughout the paper to the best of my knowledge.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Below, we address your comments and questions.
**On game dynamics being a common approach for multi-objective learning.**\
While it appears intuitive that game dynamics must have a role to play in multicalibration, given that no-regret learning plays a role in calibrated forecasting, it took 5 years and 50+ papers in this space before our work became the first to make this connection in a broad and unified way.
The nontrivial underlying challenge is not the mere reduction to min-max optimization—which is why we refer to Facts 2.5 and 2.6 as facts and not lemmas/theorems—but rather in solving the resulting equilibrium computation problem in a way that connects to practical algorithms. The needs of multicalibration (such as determinism, large and complex predictor space, etc.) differ significantly in this regard from earlier applications of general-purpose no-regret algorithms and game dynamics. Most notably, the minimizing player’s action set is the set of all predictors, which scales exponentially in the domain size $|X|$! We must therefore construct a novel and highly nontrivial online learning strategy for the minimizing player with a regret that is independent of the size of the action set and $|X|$. Moreover, this no-regret algorithm must use non-randomized predictors against an adaptive adversary (Theorem 3.7). Another example is that we need to obtain a deterministic solution from the game dynamics despite not having convexity, which required us to introduce a novel form of no-regret/best-response game dynamics (Lemma 3.4) that, to the best of our knowledge, has not been studied previously.
The key to our simple proofs are not Facts 2.5 and 2.6 but rather Theorems 3.7 and Lemma 3.4, and other lemmas in Section 3, that allow us to use different dynamics to address various specific needs and settings in multicalibration. None of these theoretical tools come from (or were even needed) in calibrated forecasting. Moreover, in section 3, we highlight how alternative game dynamics and benchmarks (such as notions of weak regret and deterministic solutions) can lead to equilibrium notions that are more in line with the needs of multicalibration. It is against this backdrop of alternative game dynamics that we introduce Theorem 3.7 that is the key to many of our results.
We agree that Theorem 3.8 is inspired by a similar lemma in calibrated forecasting, as we mentioned in the paper. But Theorem 3.8 goes further, providing a connection between online multicalibration and the calibrated forecasting literature that is new to our work. Indeed, we highlight that Theorem 3.8 is primarily used by our online multicalibration results.
**Equality on Line 230-231**
This equality holds because we can choose $h^*$ so that $h^*(x) = z^*$. Intuitively, the equality arises because we are optimizing over all well-defined predictors $h: X\rightarrow Y$.
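One way to see this equality explicitly (sketched in generic notation, since the paper's exact objective on Line 230-231 is not reproduced here):

```latex
\min_{h : X \to Y} \mathbb{E}_{(x,y)}\big[\ell(h(x), y)\big]
  \;=\; \mathbb{E}_{x}\Big[\min_{z \in Y}\, \mathbb{E}_{y}\big[\ell(z, y) \mid x\big]\Big],
```

since the minimum on the left ranges over *all* functions $h: X \rightarrow Y$, it can be attained pointwise by setting $h^*(x)$ to the inner minimizer $z^*$ at each $x$.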
**The baseline of (2) vs baseline of (1)**\
The baseline of (2) is much larger than the baseline of (1). In fact, the baseline of (2) would be trivial to attain in an offline batch setting, whereas the baseline of (1) is impossible to attain in an online setting.
**On related work.**\
We appreciate your feedback and will amend the related works to emphasize that the technical tools our results rely on differ significantly from those explored in the calibrated forecasting literature. Below, we highlight some clarifying examples.
* Non-deterministic predictors are necessary in calibrated forecasting, whereas multicalibration is usually defined for deterministic predictors. We needed to design novel game dynamics that produce deterministic solutions since, as we note above, we could not rely on convexity to perform online-to-batch reductions. These determinism/non-determinism concerns do not arise at all in calibrated forecasting.
* The concept of calibrated forecasting does not require one to reason about a covariate distribution. In contrast, multicalibration is a form of supervised learning and is—at its asymptotic limit—exactly the task of learning a Bayes-optimal classifier. As a result, calibrated forecasting requires solving only a simple online learning problem over 2 actions—predicting 0 or predicting 1—while multicalibration requires online learning over an action set that grows exponentially with the size of one's domain: $2^{|X|}$.
* Calibrated forecasting reduces to an online learning problem where sublinear regret is achievable [Fos99, Har22]. Multicalibration reduces to an online learning problem where sublinear regret is not always achievable. As a result, we had to introduce notions such as weak regret (Line 150).
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: I thank the authors for their clarifications regarding related work and concerning the challenges raised by multicalibration compared to calibrated forecasting. Most of my questions have been answered. I maintain my positive score. | Summary: The authors proposed a unified framework for multicalibration learning by exploiting its connection to the game dynamics in multi-objective learning. Strong theoretical guarantees were given and its extension to address group fairness was discussed.
Strengths: 1. The analysis of the game dynamics in multi-objective learning was novel and strong.
2. The connection between multicalibration learning and game dynamics in multi-objective learning was well exploited, and a unified framework was given.
3. The authors gave a clear presentation of the key ideas of the work despite the substantial material.
Weaknesses: 1. The experiment parts seem to compare with algorithms within the proposed framework. Is there any comparison with existing baseline algorithms?
2. How does the multi-objective learning discussed in the paper relate to the Pareto-optimal one?
3. Given different learner choices shown in the empirical section, is there any learner who fits the proposed framework better?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: see weakness.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your positive review. Below, we address your comments and questions.
**On experiment comparisons to existing baseline algorithms.**\
Our experiments already include comparisons to existing and baseline multicalibration algorithms (see lines 1189-1191 of the appendix for further clarification). An important aspect of our framework is that it recovers **all existing multicalibration algorithms**. In the current presentation of Table 2, we refer to existing baseline algorithms by the name of the dynamics they take in our framework, e.g., the original multicalibration algorithm of HKRR18 is referred to as GD-ERM and the newer multicalibration algorithm that has become the community’s de facto standard (see e.g., KGZ19, DKRRY21) is referred to as Hedge-ERM. To clarify that these are indeed the baseline algorithms, we will add the appropriate citations on those row numbers in the final version of the paper.
We note that our experiments show that using optimistic Hedge and no-regret/no-regret dynamics (as a concrete proposal for an algorithm in our framework) provides better empirical performance than both of these existing baselines.
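As a rough illustration of the dynamics named above, here is a generic sketch of Hedge and optimistic Hedge over a finite action set (not the paper's actual implementation, which operates over predictor spaces):

```python
import numpy as np

def hedge(cum_losses, eta):
    """Hedge / multiplicative weights: play actions with probability
    proportional to exp(-eta * cumulative loss)."""
    scores = -eta * cum_losses
    w = np.exp(scores - scores.max())  # subtract max for numerical stability
    return w / w.sum()

def optimistic_hedge(cum_losses, last_losses, eta):
    """Optimistic Hedge: the most recent loss vector doubles as a
    prediction of the next one, so it enters the exponent twice."""
    scores = -eta * (cum_losses + last_losses)
    w = np.exp(scores - scores.max())
    return w / w.sum()
```

In the paper's naming scheme, pairing such a no-regret player with an empirical-risk-minimizing best response gives Hedge-ERM-style dynamics, while playing two optimistic Hedge players against each other corresponds to the no-regret/no-regret (Opt-Hedge Opt-Hedge NRNR) setup.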
**Is there a learner which empirically does better?**\
As discussed above, all of the algorithms discussed in the empirical section are derived from our framework, reflecting the generality of the framework. In terms of which specific algorithms are the most empirically successful, we find that playing optimistic hedge against optimistic hedge (Opt-Hedge Opt-Hedge NRNR) is consistently the most effective algorithm. We emphasize that this choice of algorithm had not been considered in the past, and it is through the unifying feature of our framework that its importance has come to light.
**On the relationship of multi-objective learning to Pareto optimality.**\
The multi-objective learning discussed in the paper is concerned with attaining a min-max value rather than finding a strategy guaranteed to be on the Pareto front. We did not need to additionally consider the stronger condition of Pareto optimality in multi-objective optimization because the connection between multi-objective learning and multicalibration goes through the min-max value alone. We therefore do not include Pareto optimality in our definition of multi-objective learning. This is also consistent with recent works in multi-objective learning such as [HJZ22].
**References**\
[HKRR18] U. Hebert-Johnson, M. P. Kim, O. Reingold, and G. N. Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In J. G. Dy and A. Krause, editors, Proceedings of the International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research, pages 1944–1953. PMLR, 2018\
[HJZ22] N. Haghtalab, M. I. Jordan, and E. Zhao. On-demand sampling: Learning optimally from multiple distributions. In S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, 2022\
[KGZ19] M. P. Kim, A. Ghorbani, and J. Y. Zou. Multiaccuracy: Black-box post-processing for fairness in classification. In V. Conitzer, G. K. Hadfield, and S. Vallor, editors, Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 247–254. ACM, 2019.\
[DKRRY21] C. Dwork, M. P. Kim, O. Reingold, G. N. Rothblum, and G. Yona. Outcome indistinguishability. In Proceedings of the Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 1095–1108. ACM, 2021. | null | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper provides a unifying framework for the algorithm design and performance analysis of multicalibrated predictors.
In this paper, the multicalibration problem is placed in the setting of multi-objective learning.
Under this interpretation, approaches based on game dynamics are proposed and analyzed.
It is shown that this approach yields improved performance guarantees.
Strengths: - Bounds are improved for a variety of problems.
- Experimental results support the effectiveness of the proposed method.
Weaknesses: - The approach based on game dynamics does not seem very surprising as it is a common approach to transform min-max optimization into a problem of finding an equilibrium solution by interpreting it as a zero-sum game.
- I have concerns about whether the definition is consistent with that of existing studies. (please refer to "Questions")
Minor comments:
- When citing existing results in Table 1, etc., it would be better to indicate the theorem number or the relevant section. I had a hard time checking the corresponding part.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In previous studies (e.g., [17,21,34]), multicalibrated predictors appear to be defined in terms of conditional expectation.
On the other hand, in Definition 2.1 of this paper, it appears to be defined in some sense by joint probabilities.
I believe this means that the errors bounded in 2.1 are smaller than those adopted as definitions in previous studies.
This raises concerns about whether comparisons with existing research bounds are valid.
I would appreciate an answer to this concern to see if there is a problem.
- Are there lower bounds that can be compared to the results obtained?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I have no concerns about the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Below we address your comments and questions.
**Is the game dynamics approach surprising?**\
We agree that writing min-max optimization problems as a zero-sum game is a common first step in many analyses. But it is the next steps that matter, and for multicalibration it has not been obvious what those next steps might be. Indeed, no prior multicalibration work has succeeded in using a game-theoretic framing as a unifying principle for multicalibration despite over 5 years (and 50+ papers) on the topic.
The nontrivial underlying challenge is not the mere reduction to min-max optimization—which is why we refer to Facts 2.5 and 2.6 as facts and not lemmas/theorems—but rather in solving the resulting equilibrium computation problem in a way that connects to practical algorithms. The needs of multicalibration (such as determinism, large and complex predictor space, etc.) differ significantly in this regard from earlier applications of general-purpose no-regret algorithms and game dynamics. Most notably, the minimizing player’s action set is the set of all predictors, which scales exponentially in the domain size $|X|$! We must therefore construct a novel and highly nontrivial online learning strategy for the minimizing player with a regret that is independent of the size of the action set and $|X|$. Moreover, this no-regret algorithm must use non-randomized predictors against an adaptive adversary (Theorem 3.7). Another example is that we need to obtain a deterministic solution from the game dynamics despite not having convexity, which required us to introduce a novel form of no-regret/best-response game dynamics (Lemma 3.4) that, to the best of our knowledge, has not been studied previously.
Overall we think that it is quite surprising that *every* known multicalibration algorithm and guarantee can be cleanly recovered—and improved upon—with game-theoretic learning dynamics using our framework. The generality of our framework and strength of Theorems 3.7 and Lemma 3.4 are further evidence that the progress made by our work is surprising, novel, and has the potential to significantly change the landscape of research in multicalibration. We will make sure that the final version of our paper emphasizes this generality from the outset and more clearly places in relief the challenges of applying learning dynamics to multicalibration.
**Defining multicalibration with conditional expectations versus joint probabilities.**\
Thank you for raising this question. Due to the inconsistent definitions of multicalibration employed in the multicalibration community, this has been an ongoing point of confusion.
We define multicalibration using joint probabilities in the same way as other recent literature (see GHKRS23, DLLT23), where the error tolerance for a group $G$ is inversely proportional to the group’s probability mass $\Pr(x \in G)$. Some early multicalibration works (such as HKRR18) define multicalibration with a conditional expectation—where the error tolerance for a group is independent of the group’s probability mass—**but make an assumption that all groups have a probability mass of at least some constant fraction** (see, e.g., Theorem 3.7 in HKRR18, which states $|S| \geq \gamma N$). That assumption means that their definition is actually weaker than our more direct joint-probability definition that is now the norm in the multicalibration community. Please let us know if this addresses your question; we will also add a footnote clarifying this in the final version.
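Schematically (in simplified notation; the precise statement is Definition 2.1), the relationship between the two conventions is just the identity $\mathbb{E}[A \cdot \mathbf{1}_B] = \mathbb{E}[A \mid B]\Pr(B)$:

```latex
\big|\,\mathbb{E}\big[(y - h(x))\,\mathbf{1}[x \in G,\ h(x) = v]\big]\,\big| \le \alpha
\quad\Longleftrightarrow\quad
\big|\,\mathbb{E}\big[\,y - h(x) \,\big|\, x \in G,\ h(x) = v\,\big]\,\big| \le \frac{\alpha}{\Pr(x \in G,\ h(x) = v)},
```

so a fixed joint-probability tolerance $\alpha$ translates into a conditional tolerance that is inversely proportional to the group's probability mass.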
We also want to bring to your attention that one of our results, Corollary E.4, shows that a guarantee even stronger than those of prior works is possible when the error tolerance for a group $G$ scales with $1 / \sqrt{\Pr(x \in G)}$ rather than $1 / \Pr(x \in G)$. We are able to obtain this novel guarantee by plugging in second-order regret bounds into our game dynamics framework.
**Lower bounds**\
We are not aware of any (nontrivial) lower bounds for multicalibration. We want to emphasize that our work has improved several upper bounds in multicalibration (see Table 1), suggesting the community’s known results are likely not tight.
[HKRR18] U. Hebert-Johnson, M. P. Kim, O. Reingold, and G. N. Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In J. G. Dy and A. Krause, editors, Proceedings of the International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research, pages 1944–1953. PMLR, 2018\
[GHKRS23] I. Globus-Harris, D. Harrison, M. Kearns, A. Roth, and J. Sorrell. Multicalibration as boosting for regression. arXiv preprint arXiv:2301.13767, 2023.\
[DLLT23] C. Dwork, D. Lee, H. Lin, and P. Tankala. New insights into multicalibration, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your thoughtful reply.
All my concerns have been addressed and I have no additional questions.
I think I understand the discussion about the definition of multicalibration thanks to the responses.
I would like to determine the final score after reading the opinions of other reviewers. | null | null | null | null | null | null |
Emergent Correspondence from Image Diffusion | Accept (poster) | Summary: The authors find that intra- and cross-category correspondences are implicitly learnt by diffusion models trained self-supervised on large datasets. The paper proposes an approach to extract this knowledge as features from pre-trained U-Net-based diffusion models. In particular, to compute the features of a particular image, noise is added to the image to simulate the diffusion process, and the result is input to the pre-trained diffusion model. The intermediate layer activations from the U-Net at a particular timestep are used as features. They showcase the performance of these features on semantic matching, outperforming other unsupervised baselines and strongly-supervised approaches specifically designed for semantic matching. They also show that without any task-specific finetuning, the features can be used for geometric matching and temporal matching (video object segmentation through label propagation), with competitive performance compared to the state of the art.
Strengths: - The proposed idea is simple and effective
- The results are convincing. Having a single model applicable to many different correspondence tasks without specific architecture/training and outperform task-specific methods would be a useful contribution. The new perspectives that it opens are exciting.
- The paper overall reads well.
Weaknesses: 1) The authors do not comment on the run-time of the proposed approach. I expect it will be quite slow since the image needs to go through the reverse diffusion process.
2) Related to the above, relying on the diffusion model at inference time makes the method impractical in many applications. To remove the reliance on diffusion models at inference, would it be possible to train a feature predictor, using as ground-truth the features extracted from a pre-trained models? Have you done any experiments in this direction?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: A) What is the run-time of the approach to extract feature on a single image?
B) I think details on the ‘architecture’ of the extracted features are missing. It is only said L.113-114 that “intermediate layer activations” are extracted from the unet, and that each depends on the task. I would appreciate to have some details there - what layers are extracted (for the different tasks), what resolution and channel dimension do they have, are they interpolated? Are features from multiple levels used? If so how are they aggregated?
C) What is the resolution of the extracted feature and at what resolution is the matching done? Is a similar resolution used in competitor works like DINO?
The resolution of the feature maps is of crucial importance for obtaining fine-grained/accurate correspondences. Since matching is done by computing all-to-all similarities between the two images, It has often limited previous approaches to small resolution because of memory constraint. Just using a higher resolution feature map would lead to a significant improvement in results, not necessarily related to the features themselves.
D) Some misclassification and missing comparisons in semantic matching:
D.1) CNNGeo and A2Net are unsupervised - they only use single images as supervision, as opposed to NCNet for example which requires pairs of similar images
D.2) Comparison to recent approches are missing. In particular PWarpC [Truong et al. CVPR 2022] obtained state-of-the-art results in weakly-supervised semantic matching (with extension to supervised)
E) Would it be possible to extract features from multiple layers of the diffusion model such as to build a feature pyramid (similar to VGG or ResNet)? This could potentially be used as a plug-in replacement for backbones in correspondence approaches with a decoder.
F) How many different passes through the network (ie different random noise) are used for an image?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful feedback! Please see our response below:
**Run-time of DIFT**.
Please refer to point 2 of global rebuttal above.
Briefly speaking, DIFT is actually fast because it doesn’t need to run diffusion inversion, so only one network inference is involved (see the second last paragraph of Sec. 4.2). Also, since we only need the intermediate U-Net features, the inference process can stop immediately upon reaching the desired block and doesn’t need to run the following layers. For example, it only takes 203 ms for DIFT$\_{sd}$ vs. 231 ms for OpenCLIP on one single image. In addition, we also discuss the usage of diffusion inversion in the first paragraph of Appendix B.
**Train a feature predictor network to have faster diffusion features**.
Given the speed of DIFT is already pretty fast as mentioned above, we haven’t explored the direction to train a feature predictor network such that it can make DIFT even faster. But we believe the techniques in the field of knowledge distillation (e.g., train a smaller student network to distill knowledge from a much larger teacher network) could be generally applied to diffusion features, especially when latency of DIFT becomes a main bottleneck in practical usage.
**Implementation details of DIFT including hyperparameters and resolutions**.
We include the implementation details in Appendix C including the time step $t$ and network block index $n$ used in each task. We’ll add a pointer in the main paper in the future version. For input image size, feature map size and feature dimensions, see point 2 of global rebuttal above. Note that the image size used in DIFT and its competitor self-supervised feature is the same for each task, i.e., DIFT$\_{sd}$ vs. OpenCLIP, DIFT$\_{adm}$ vs. DINO, so the comparison should be fair. The per-pixel feature is extracted through bilinear interpolation on the feature map. We also include the code of DIFT (see `sd_featurizer.py` and `adm_featurizer.py`) in the submitted supplementary material.
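The per-pixel lookup mentioned above can be sketched as plain bilinear interpolation (a generic illustration, not the paper's code; the keypoint coordinate is assumed to already be rescaled to the feature map's resolution):

```python
import numpy as np

def sample_feature(fmap, x, y):
    """Bilinearly interpolate a (C, H, W) feature map at a continuous (x, y)."""
    _, H, W = fmap.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)  # clamp at the border
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * fmap[:, y0, x0]
            + wx * (1 - wy) * fmap[:, y0, x1]
            + (1 - wx) * wy * fmap[:, y1, x0]
            + wx * wy * fmap[:, y1, x1])
```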
**Misclassification and missing comparisons in semantic matching**.
Thanks for the correction and we will revise the tables accordingly and also include PWarpC as baseline.
**Aggregation of multi-layer diffusion features**.
For all the results in the paper, only single layer diffusion features are used. In our preliminary experiments, we also found that aggregating features from multiple layers could slightly improve the performance. However, this also inevitably introduces many design choices (e.g., how to do the aggregation) and hyper-parameters (e.g., the layer and time step t to aggregate and the weight on each of them), which could vary across different downstream tasks. Optimizing them could further improve the performance, but also entangles the quality of the features with the tuning of these design choices and hyper-parameters. Since the main focus of our paper is to demonstrate correspondence emerges from image diffusion without explicit supervision, we focus on the most simple technique and the raw off-the-shelf single timestep/layer diffusion features. We’ll leave how to improve the performance by constructing a multi-layer/timestep diffusion feature pyramid to future work.
**Number of network inference passes used for each image**.
It only takes one single network pass to extract DIFT for each image. As mentioned in Line 128-130, to enhance the stability of the representation in the presence of random noise added to the input image, we extract features from multiple noisy versions with different samples of noise, and average them to form the final representation. As mentioned in the first paragraph (Line 470-472) of Appendix C, this is usually done in a batch manner so only one network inference is needed per image (see the `forward()` function in `sd_featurizer.py` and `adm_featurizer.py` of the code in our submitted supplementary material). We usually sample 8 noise samples per image but sometimes shrink it to 4 when having memory issues. Changing from 8 to 1 noise sample per image leads to only a very small performance drop, e.g., on SPair-71k, DIFT$\_{sd}$: PCK 59.5→57.9; DIFT$\_{adm}$: PCK 52.0→51.1.
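The procedure described above can be sketched as follows; here `unet_block` is a hypothetical stand-in for the truncated U-Net forward pass, and the schedule value `abar_t` ($\bar{\alpha}_t$) is illustrative rather than taken from the actual model:

```python
import numpy as np

def add_noise(x0, abar_t, eps):
    """Forward diffusion step: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps

def dift_features(x0, abar_t, unet_block, n_noise=8, seed=0):
    """Batch several noisy copies through the network in one pass and
    average the intermediate activations to stabilize the representation."""
    rng = np.random.default_rng(seed)
    noisy = np.stack([add_noise(x0, abar_t, rng.standard_normal(x0.shape))
                      for _ in range(n_noise)])
    return unet_block(noisy).mean(axis=0)  # average over the noise samples
```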
---
Rebuttal 2:
Comment: Hi! It is now the discussion period. Please have a look at the rebuttal and let us know if it affected your thoughts on the paper in any way.
---
Rebuttal Comment 2.1:
Comment: The authors addressed my concerns. I will upgrade to accept.
---
Reply to Comment 2.1.1:
Comment: We're thankful that you've taken the time to read our response and raise the score! Your feedback is highly treasured, and we'll make the necessary revisions to our paper in its future version. | Summary: The paper proposes to use off-the-shelf generative networks based on denoising diffusion models to find local correspondences. The paper is extremely simple: instead of generating samples purely from random noise or doing some kind of image-based conditioning, the method just adds random noise to the input image and takes some intermediate level in the denoising U-Net as a dense feature map, from which it can extract sparse features. Feature matching is done via cosine distances. Keypoints may be provided (e.g. for semantic matching) or taken with an off-the-shelf keypoint detector, such as SuperPoint. The authors show that their approach works for high-level semantics (e.g. an eye across different animals or even species) and geometric correspondences, and can also track points across time in video sequences. Parameters such as the stage in the denoising process or which intermediate layers to use are chosen per dataset. There are some tricks (e.g. featuremaps are averaged over batches with different noise inputs), but that's about it.
Strengths: 1. Cool idea.
2. Very simple. The actual method section is a single paragraph and doesn't require any math. The paper is well written and easy to understand.
3. Very good results on multiple tasks, including semantic matching and tracking in video sequences, without training or fine-tuning.
Weaknesses: 1. My main complaint is that I find it hard to believe this approach will work well across arbitrary geometric changes. I have experience in this field and conclusions drawn from HPatches rarely translate to real scenarios. The paper gives very few results, which makes these experiments unreliable and difficult to trust (my guess is that this gap will disappear when the method is properly evaluated). For instance:
- What is the image size?
- How many points do you use?
- Do you use both the illumination and viewpoint splits for HPatches (the supplementary material suggests so)? If so, why not split the results by sequence type?
- Why not use the more standard MMA metric from D2-Net (which isn't perfect but is easier to understand), instead of estimating homographies? Why not both?
- Why not use modern baselines? The most recent method is from 2019 (R2D2): see e.g. DISK, PoSFeat, ALIKE, or SILK (citations below). (Note: CAPS is weakly supervised and works worse than the DISK variant supervised only with epipolar geometry, as far as I know.)
- Could you provide a simple precision/recall curve and compare it against traditional local feature methods? My intuition is that it would be worse.
- What happens if you add RANSAC?
I generally discount claims on this dataset and steer people towards evaluation benchmarks focusing on downstream tasks, such as visualocalization.net or the image matching challenge (https://image-matching-workshop.github.io/). I understand that this is likely beyond the scope of this paper, and I think it would be acceptable that it just shows these features work reasonably well for rigid matching, but if you want to claim that "though not trained using any explicit geometry supervision, DIFT outperforms prior state-of-the-art methods that utilize explicit geometric supervision signals designed specifically for this task, such as correspondences obtained from Structure from Motion pipelines" (L210-213), then you're going to have to substantiate that much better.
2. Given that the paper is so simple and the concept itself is very easy to explain, it would be nice to see more introspection. For instance, showing qualitative results using PCA or t-SNE to cluster the features (see for instance the videos from DINO v2) would help understand what's going on.
3. Important details are relegated to the supplementary material:
- The amount of noise used per dataset. It's surprising that, for instance, for geometric correspondence with Stable Diffusion the best results are with t=0, which means basically no noise (if I understand the formulation correctly; I looked at the code).
- The fact that for semantic correspondence per-class prompts are used. I think this is a reasonable assumption, but it should be mentioned. It also suggests the method may not work so well in non-object-centric images.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: The paper is cool and novel, but given the issues above I'll mark it as borderline (BA, since I only have BA and BR) and wait on the rebuttal and reviewer discussion.
- How do you deal with the loss in resolution, especially for geometric matching? The supplementary material says the method uses the 11th upsampling block out of 18 for ADM, and the 2nd block out of 4 for SD. Do you simply upscale the feature maps? And if so how do you achieve accurately localized correspondences?
- Do you choose per-dataset hyperparameters on a validation set or the test set?
- What is the computational cost and how does it compare with other methods?
- What does "after removing outliers" mean in fig 10 in the supplementary material? Why not show the raw matches and color-code them according to correct/incorrect?
- What happens without per-class prompts?
Notes:
- It's nice to see that the method is sensitive to the choice of t but varies smoothly (Fig. 5). It would be nice to see this in other datasets, though. And what about n?
- The images in Fig. 10 in the supplementary material have the wrong colors (I guess BGR, from loading/saving with OpenCV).
- Please add the dataset and metric to the headers in Tables 1 and 2 (one's missing the metric, the other two the dataset names).
Citations:
[DISK] https://arxiv.org/abs/2006.13566 (https://github.com/cvlab-epfl/disk)
[PoSFeat] https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Decoupling_Makes_Weakly_Supervised_Local_Feature_Better_CVPR_2022_paper.pdf (https://github.com/The-Learning-And-Vision-Atelier-LAVA/PoSFeat)
[ALIKE] https://arxiv.org/pdf/2112.02906.pdf (https://github.com/Shiaoming/ALIKE)
[SILK] https://arxiv.org/abs/2304.06194 (https://github.com/facebookresearch/silk)
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful feedback! Please see our response below:
**Implementation details and evaluations on HPatches**.
- Image size: all images are resized to 768x768 then fed into the network to extract feature maps.
- Number of points: following CAPS, we use SuperPoint to extract keypoints and it has 1.7k points per image on average.
- MMA metric: we plot MMA curves in Fig. 4 of the attached pdf.
- Modern baselines: We'll include the suggested methods in the revised comparison table. We include the DISK variant trained with epipolar supervision only (DISK$\_{epi}$) in the MMA figure above and report their homography numbers below.
- Dataset splits: Tab. 3 shows the overall accuracy. Here are the per-split numbers (also for DISK$\_{epi}$):
| Method | $\epsilon=1$ | $\epsilon=3$ | $\epsilon=5$ |
| ----- | ------ | ----- | ----- |
| Illumination | | | |
| DIFT$\_{sd}$ | 64.2 | 93.5 | 97.3 |
| DISK$\_{epi}$ | 66.9 | 93.1 | 96.9 |
| Viewpoint | | | |
| DIFT$\_{sd}$ | 28.9 | 63.2 | 72.9 |
| DISK$\_{epi}$ | 28.6 | 62.1 | 75.0 |
- PR curve comparison with traditional local feature methods: using the same SIFT keypoints, we compare the mean Average Precision (mAP) of all image pairs using different feature descriptors with a threshold of 5 pixels: SIFT=74.9, DIFT$\_{sd}$=69.7, DINO=45.6. Although both DIFT and SIFT have high precision at high recall, the mAP for DIFT is lower because DIFT's scoring function is not optimized. DIFT achieves better MMA and homography accuracy than SIFT.
- Add RANSAC: given a pair of images, we first use cosine distance to find mutual nearest neighbor matches, then use `cv2.findHomography()` with `method=cv2.LMEDS` to remove outliers and calculate the homography transformation. Empirically, we find LMEDS works better than RANSAC.
- Loss of resolution: after getting the dense (but also lower-resolution) feature map, pixel-level features are extracted via bilinear interpolation. We use SuperPoint to localize the keypoints then extract their DIFT features to do matching. More details about image/feature map resolution are in point 2 of the global rebuttal.
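The matching pipeline described in the bullets above (mutual nearest neighbors under cosine similarity, then robust homography fitting) can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; `cosine` and `mutual_nn_matches` are hypothetical helper names, and real feature vectors would be high-dimensional arrays rather than short lists:

```python
import math

def cosine(u, v):
    # cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mutual_nn_matches(feats1, feats2):
    """Return index pairs (i, j) where i's best match in feats2 is j
    AND j's best match in feats1 is i (mutual nearest neighbors)."""
    best12 = [max(range(len(feats2)), key=lambda j: cosine(f, feats2[j]))
              for f in feats1]
    best21 = [max(range(len(feats1)), key=lambda i: cosine(feats1[i], g))
              for g in feats2]
    return [(i, j) for i, j in enumerate(best12) if best21[j] == i]
```

The surviving pairs would then be passed to `cv2.findHomography(pts1, pts2, method=cv2.LMEDS)` to reject outliers, as described in the bullet above.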
To address the reviewer's concern, we will modify our claim: "though not trained with any explicit geometry supervision, DIFT still achieves competitive performance on HPatches compared to the methods explicitly trained for geometric correspondence with weak/epipolar supervision".
**More introspection of DIFT**.
Please refer to point 1 of the global rebuttal above, and Fig. 1 and 2 of the attached pdf.
- PCA visualization: we visualize the first 3 PCA components of DIFT$\_{sd}$ on segmented object pairs. As shown in Fig. 1, the object parts that share the same semantic meaning tend to have similar DIFT embeddings.
- Different t for different correspondence tasks: we visualize the predicted clean images at different time steps t in Fig. 2 as an intuitive way to demonstrate that features contain more semantic-level information and fewer low-level details as t increases, which partially validates the usage of a very small t for geometric correspondence.
**The usage of per-class prompt in semantic correspondence**.
Changing the per-class prompt to a null prompt (empty string) only leads to a very small performance drop for DIFT$\_{sd}$ on SPair-71k: 59.5→57.6, as in Tab. 1 of the main paper.
We'll mention the usage of per-class prompt in the revised main paper. Note that only DIFT$\_{sd}$ needs a prompt as part of the input while DIFT$\_{adm}$ doesn’t. This per-class prompt is only used in semantic correspondence tasks, while null prompt is used for the others. The prompt design is simple and not image specific, only using class name, i.e., “a photo of a {class}”. For CUB, we actually use the same prompt for all images: “a photo of a bird”.
**Wrong color format in Fig. 10; Missing headers in Tab. 1 and 2; Important details in the supp**.
We apologize for the error. We'll fix this and table headers in the revision. We'll also add these implementation details in the main text plus a pointer to the appendix.
**Per-dataset hyperparameters**.
Most of the datasets used in the paper don't have a validation set, so we chose the two hyperparameters of DIFT (i.e., time step t and block index n) based on the test performance. But since our hyperparameter search on DIFT is pretty coarse, we do not believe our results are due to overfitting. Fig. 5 in the main paper and Fig. 5, 6 and 7 in the attached pdf show the performance for all possible $t$ and $n$ values; we will show similar plots for all tasks in the camera ready supplementary. Note that, for fair comparison, we also densely grid-search which layer to extract features from for DINO and OpenCLIP on each dataset. Also, as mentioned in Lines 154-156, for semantic correspondence, the hyperparameter tuning is only done on SPair and fixed afterwards for the other datasets.
**Computation cost**.
Please refer to point 2 of global rebuttal above.
Briefly speaking, DIFT takes a similar running time to its competitor self-supervised features, e.g., on a single A6000, for each image, DIFT$\_{sd}$ takes 203 ms vs. OpenCLIP’s 231 ms. We'll also include these details in the revision.
**"after removing outliers" in Fig. 10**.
We only visualize the matches that are marked as "inlier" after applying `cv2.findHomography()`. We'll add figures with colored raw matches in the revision.
**Ablation on t and n**.
We ablate DIFT$\_{sd}$ with different t on HPatches and DAVIS, as in Fig. 5 and 6 of the attached pdf, where similarly to Fig. 5 of the paper, performance varies across t smoothly.
We also ablate which layer to extract DIFT$\_{sd}$ from for SPair in Fig. 7 of the attached pdf, where layer index i is more fine-grained than block index n, i.e., the SD U-Net has 4 upsampling blocks with 15 layers in total. The mapping from n to i is 0/1/2/3 to 3/7/11/14 (both start from 0). We can see the performance varies, but there's a wide range of i with good performance.
---
Rebuttal Comment 1.1:
Comment: Woops! I just noticed that I pasted things in the wrong fields when copying the text from my editor to openreview, and pasted the summary twice (so things were shifted by one field). Sorry, I try to be careful with these things but I had many papers to review. I guess it was easy to understand and all the content was there, so no harm done. I have fixed this now.
**[Experimental details on HPatches]**
Thanks for clarifying this. I ask for these details because I've seen papers do very questionable things (e.g. resizing images to 320x240 and benchmarking on that, or running RANSAC with horrible defaults). The comparison with DISK is convincing.
At first sight it's a bit surprising that the performance for DIFT_{sd} is quite good at low pixel thresholds when the feature maps are at 1/8 the input resolution, but SuperPoint does the same for the feature maps, and the keypoints are extracted at the input resolution, so it makes sense. (You might have more issues discriminating the features with tighter non-maxima-suppression.)
I find it surprising that LMEDS works better than RANSAC (my experience is 100% the opposite), and this very likely won't hold on more challenging datasets. It is also possible that the authors are not tuning RANSAC well enough (it's difficult to do this well -- plus OpenCV 4 has newer RANSAC variants built-in that perform quite a bit better than vanilla RANSAC), but maybe HPatches is just easy enough and it doesn't make much of a difference.
**To address the reviewer's concern, we will modify our claim: "though not trained with any explicit geometry supervision, DIFT still achieves competitive performance on HPatches compared to the methods explicitly trained for geometric correspondence with weak/epipolar supervision".**
To clarify, my concern was not that you claim that the method works well on HPatches as much as on "geometric correspondence". HPatches is in my opinion not a good dataset to evaluate geometric correspondence, because it only has homographies (and half of it is illumination changes, which is an even more specific problem). And I say this knowing that that's what many papers on geometric matching do (which I often reject, because again, it's not a representative problem).
The fact that the method works this well, off-the-shelf, on HPatches is really cool, and my reception of the paper might have been warmer if you had only included the other experiments. But I'm not going to penalize the paper for that. While I encourage you to acknowledge the limitations of the dataset, I'll raise my score to "weak accept" now that I can trust the evaluation, given the details the authors provided in the rebuttal.
**[Ablations on t on other datasets, and on n]**
Thanks, I think this material strengthens the paper, particularly since the focus of the paper is demonstrating how to use these off-the-shelf models for a different purpose (same reason why I think it's important to note the prompt, and what happens when you leave it blank). Consider my questions answered.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for reading our rebuttal and raising the score! We really appreciate your valuable feedback and suggestions especially on the geometric correspondence section. We'll revise our paper accordingly in the future version.
---
Rebuttal 2:
Comment: Hi! It is now the discussion period. Please have a look at the rebuttal and let us know if it affected your thoughts on the paper in any way. | Summary: The paper addresses a classical computer vision problem, i.e., point correspondence. The authors show that the feature maps of the decoder of a diffusion model U-Net enable robust feature matching with a simple nearest neighbor search. Semantic or geometric correspondence can be achieved by selecting the appropriate denoiser time step. The authors report results, quantitative analysis and comparisons on several benchmarks, chosen for each specific task.
Strengths: The authors observe empirically that the features contained in the U-Net decoder are powerful features for image correspondence. They also show that those features embed different levels of semantic information, depending on the time step of the denoiser.
It is particularly relevant given the fact that the diffusion models used by the authors are pretrained (the authors only add noise to the input image), and that the conclusions drawn are consistent over different DMs.
The set of experiments reported in the paper are thorough and results are convincing. The variety of the results, quantitative and qualitative, demonstrate the reliability and flexibility of such features.
I particularly enjoyed reading a paper presented with such simplicity and clarity.
Weaknesses: The drawback of the approach is probably the computational complexity that is intrinsic to any point matching approach relying on nearest neighbor search based on high dimensional features.
It is not clear from the paper (nor the supplementary material) whether choosing the optimal denoising time step is critical or not. A corollary of this question would be: do we expect the semantic level contained in the features to degrade smoothly from large time steps to small ones?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1) In figure 3, it seems that the semantic information is such that the right ear feature is distinct from the left ear one. Is it a general case or does this happen in this specific example only due to a very similar pose in the source and the target image?
2) Did the authors try to apply clustering based on U-Net features?
3) Could the authors elaborate on the computation time? What is the dimension of a DIFT feature?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors rightly mentioned the ethical issues related to generative models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful feedback! Please find our response below:
**Clustering on U-Net features**.
Please refer to point 1 of the global rebuttal above and Fig. 1 of the attached pdf, where we visualize the first three PCA components of DIFT$\_{sd}$ on the segmented instance pairs, and we can see that the same kinds of object parts share similar feature embeddings. We'll include more such visualizations in the future version.
**Computation time and feature dimensions**.
Please refer to point 2 and 3 of the global rebuttal above.
Briefly speaking, DIFT takes a similar running time to its competitor self-supervised features, e.g., on a single A6000, for each image, DIFT$\_{sd}$ takes 203 ms vs. OpenCLIP’s 231 ms, and DIFT$\_{adm}$ takes 110 ms vs. DINO’s 154 ms. The feature dimension depends on which U-Net layer the feature is extracted from, and we list the details for every correspondence task above.
We'll also include these details in the future version of our paper.
**Is the choice of time step t critical?**
In the main paper, we briefly discussed the influence of time step t in Sec. 5.2 Line 176-178, and Fig. 5 shows how the semantic correspondence performance varies across different t. It can be seen that the curve is pretty smooth and there’s a large range of t where DIFT gives competitive performance. But the accuracy also decreases significantly when t becomes too large or too small.
As suggested by reviewer zLC3, we have added new experiments that ablate the choice of $t$ on the geometric correspondence (HPatches) and temporal correspondence (DAVIS) tasks in Fig. 5 and 6 of the rebuttal document respectively, where they share the similar observation as above.
Given these experiments, we would say the choice of t is definitely an important hyperparameter that users need to tune for specific downstream correspondence tasks, but the trend is quite smooth and the changes are not sharp, so it’s not hard to find a good enough t within a reasonable number of trials. Based on our experiments, intuitively we find that within a reasonable range (e.g., t cannot be too large, otherwise $x_t$ is too noisy), larger t usually leads to more semantic-level features and smaller t leads to features that contain more low-level image details. Users can also use this heuristic to help narrow down the search.
In addition to these new ablations and visualizations, Figure 2 in the rebuttal document attempts to give an intuition for the effect of $t$. Please see point 1 of the global rebuttal for further explanations of this figure.
**Do we expect the semantic level contained in the features to degrade smoothly from large time step to small ones?**
As mentioned in the above comments, based on the ablations and experiments we have, we think this is a good heuristic to have when tuning the time step t for downstream tasks. The smoothness is also reflected in the low sensitivity of semantic correspondence performance to the choice of t, as in Fig. 5 of the main paper. In Fig. 2 of the attached pdf and point 1 of the global rebuttal above, we also give an intuitive explanation of why this happens by visualizing the predicted clean image at different time steps t.
**Distinction between symmetric object parts using DIFT**.
There are a lot of such symmetric keypoints in SPair-71k/CUB/WILLOW (e.g., left vs. right ear/handle/wing, fore-wheel vs. back-wheel), and the prediction has to distinguish them from each other in order to be counted as correct. So the high accuracy numbers on these benchmarks actually indicate that DIFT is able to handle these cases pretty well in general. In addition to Fig. 3, the last two rows of Fig. 4 also demonstrate a few cross-category cases of distinguishing left/right paws/eyes. Especially in the last row, we can see that even when the poses of source and target are quite different, DIFT with simple cosine similarities is still able to give correct predictions.
**Large computational complexity that is intrinsic to any point matching approach relying on nearest neighbor search on high dimensional features**.
The main focus of our paper is to demonstrate that correspondence emerges from image diffusion models without explicit supervision. In order to show that, we use a simple technique to extract the feature representation inside the U-Net, paired with the most straightforward feature matching strategy (i.e., nearest neighbor using cosine distance), and it achieves competitive performance on several correspondence tasks. The simplicity of the matching mechanism further underscores the quality of the diffusion features themselves.
But we agree that this nearest neighbor matching strategy is definitely not ideal, and more sophisticated designs from the feature matching literature could further boost DIFT’s performance and efficiency, e.g., a coarse-to-fine matching strategy. Meanwhile, in terms of efficiency in practice, note that DIFT is basically as fast as other off-the-shelf self-supervised features (e.g., DIFT$\_{sd}$ takes 203 ms running on a single input image, see more details in point 2 of the global rebuttal above). And when calculating cosine distance, we can L2-normalize each feature independently and then take dot products, which is also very efficient in terms of time and memory compared to other correspondence methods [27, 31] that use the attention mechanism to calculate the correlation. In addition, since DIFT features are image-independent, we can pre-calculate and save them for each image in advance before doing the matching, whereas methods [27, 31] that require a pair of images as network input cannot take advantage of this pre-processing, especially when the same image needs to be queried multiple times.
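The normalize-then-dot-product trick mentioned above can be sketched as follows. This is a hypothetical pure-Python illustration with made-up helper names (`l2_normalize`, `similarity_matrix`); in practice both steps would be batched matrix operations on GPU, with the normalized features cached per image:

```python
import math

def l2_normalize(feats):
    # normalize each feature vector once, up front; can be cached per image
    out = []
    for f in feats:
        n = math.sqrt(sum(x * x for x in f)) or 1.0
        out.append([x / n for x in f])
    return out

def similarity_matrix(feats1, feats2):
    # after L2 normalization, cosine similarity reduces to a plain dot
    # product, so the full pairwise matrix is a single matrix multiply
    a, b = l2_normalize(feats1), l2_normalize(feats2)
    return [[sum(x * y for x, y in zip(u, v)) for v in b] for u in a]
```

Because the normalized features depend only on one image, they can be precomputed and reused across queries, unlike pair-conditioned attention-based correlation.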
We'll add this clarification into the future version of our paper.
---
Rebuttal Comment 1.1:
Title: Official comment by reviewer 4nEG
Comment: Thanks to the authors for the detailed rebuttal and clarifications regarding experimental setup. I believe that these clarifications, together with the additional experiments, will consolidate the paper. My concerns have been addressed and I maintain a recommendation for accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for investing time in reading our rebuttal and for responding with a higher score! We truly appreciate your feedback, and we'll revise our paper accordingly in the future version.
---
Rebuttal 2:
Comment: Hi! It is now the discussion period. Please have a look at the rebuttal and let us know if it affected your thoughts on the paper in any way. | Summary: This paper introduces DIFT, a method to yield emergent correspondence from image diffusion models without training or additional fine-tuning.
The method is simple - given an image (or an image pair), DIFT adds noise to the image to simulate the forward diffusion process, and pass it to the U-Net of a pretrained diffusion model to extract feature maps.
The authors discover that by simply computing the cosine similarity between the emergent feature maps, one can establish strong semantic, geometric and temporal correspondences without training.
Strengths: * The discovery of emergent correspondences from image diffusion is novel. The proposed method is surprisingly simple as well, and this opens new possibilities and research directions for future work.
* Strong performances on standard benchmarks of semantic correspondence, geometric correspondence and temporal correspondence.
* The writing is clear and easy to follow.
Weaknesses: * Missing evaluation of PF-PASCAL for semantic correspondence. This is not a critical drawback, as results on PF-PASCAL tend to be saturated.
* Incomplete implementation details. What was the image size / feature map size used to establish the correspondences? Image size is a critical factor in many image correspondence methods.
* Lack of latency and computation analysis. This is crucial to identify the applicability of DIFT to real world scenarios.
* Lack of rationale or analysis on how exactly the image diffusion models can yield such feature maps easily. While some motivation is provided in the introduction, it seems insufficient for the readers to fully understand how the correspondences are emerging from image diffusion. Analyzing some failure cases could be helpful.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses section. The idea, simplicity, and novelty of the paper is strong, but I believe the paper lacks the experimental details and analyses to be accepted as-is.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have included the ethical considerations in the paper, but not the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful feedback! Please find our response below:
**Missing evaluation on PF-PASCAL**:
Here is the comparison on PF-PASCAL:
| Method | PCK@$\alpha_{img}$=0.1 |
| ----- | ----- |
| PWarpC [Truong et al. CVPR 2022] (see review by rXbs) | 87.6 |
| DIFT$\_{sd}$ | 84.6 |
| OpenCLIP | 61.1 |
| DIFT$\_{adm}$ | 72.5 |
| DINO | 53.6 |
PWarpC is a state-of-the-art weakly-supervised method mentioned by Reviewer rXbs, trained on training images with only image-level class labels. We didn’t evaluate on PF-PASCAL at the beginning because we found that over half (i.e., 302 out of 506) of the test images actually also appear in the training set, which makes the benchmark numbers much less convincing and also partially explains why previous supervised methods tend to have much higher test accuracy on PF-PASCAL vs. PF-WILLOW (e.g., over 90 vs. around 70) even using exactly the same trained model. This duplication of train/test images also puts methods that are never adapted (either supervised or unsupervised) on the training set before evaluation at a huge disadvantage. However, even in this case, DIFT still demonstrates competitive performance as well as huge gains vs. other off-the-shelf self-supervised features. We’ll add this clarification in the future version of the paper.
**Latency of DIFT**:
Please refer to point 3 of the global rebuttal above.
Briefly speaking, DIFT takes a similar running time to its competitor self-supervised features, e.g., on a single A6000, for each image, DIFT$\_{sd}$ takes 203 ms vs. OpenCLIP’s 231 ms, and DIFT$\_{adm}$ takes 110 ms vs. DINO’s 154 ms. We'll include these details in the future version of the paper.
**Lack of experiment details, e.g., image size, feature map size**.
The input image size varies across different tasks, but we always keep it the same within the comparison vs. other off-the-shelf self-supervised features (i.e., DIFT$\_{adm}$ vs. DINO, DIFT$\_{sd}$ vs. OpenCLIP), so the comparison should be fair. For every correspondence task, we include the details of image size and feature map size/dimension in point 2 of the global rebuttal above. We also included implementation details in Appendix C and the code of DIFT (see `sd_featurizer.py` and `adm_featurizer.py`) in the submitted supplementary material. We'll include the above details either in the main paper or add a pointer to the supplementary.
**Lack of analysis on why correspondence emerges from image diffusion**.
Please refer to point 1 of the global rebuttal above.
We provide two more visualizations in the attached pdf to help the audience better understand how DIFT works. In Fig. 1, we visualize the first three PCA components of DIFT$\_{sd}$ on the segmented instance pairs, and we can see that the same kinds of object parts share similar features. In Fig. 2, we visualize the predicted clean images at different time steps $t$, and we can see that, within a reasonable range, when $t$ gets larger, the predicted clean images retain the overall structure but have fewer details, suggesting DIFT contains more semantic-level information and fewer low-level features as $t$ increases. This also explains why we need a small $t$ for correspondence tasks that require details and a relatively large $t$ for semantic correspondence.
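The PCA visualization described above amounts to projecting each per-pixel feature onto the top principal components and mapping them to color channels. A minimal sketch of computing the first component via power iteration (`first_pc` is a hypothetical stand-in for `sklearn.decomposition.PCA`, which one would actually use on the stacked per-pixel features):

```python
import math

def first_pc(feats, iters=200):
    """Top principal component of a list of feature vectors, via power
    iteration on the covariance matrix (no numpy needed for a sketch)."""
    d = len(feats[0])
    mean = [sum(f[k] for f in feats) / len(feats) for k in range(d)]
    centered = [[f[k] - mean[k] for k in range(d)] for f in feats]
    v = [1.0] * d
    for _ in range(iters):
        # w = Cov @ v, computed as X^T (X v) without forming Cov explicitly
        proj = [sum(x * vi for x, vi in zip(row, v)) for row in centered]
        w = [sum(p * row[k] for p, row in zip(proj, centered)) for k in range(d)]
        n = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / n for x in w]
    return v
```

For the figures, one would keep the first three components and use the three projection values of each pixel's feature as its RGB color.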
We believe that the diffusion training objective (i.e., coarse-to-fine reconstruction loss) requires the model to produce good, informative features for every pixel. This is in contrast to DINO and OpenCLIP, which use image-level contrastive learning objectives, and may explain why DIFT outperforms them. In our experiments, we have attempted to evaluate the importance of the training objective by specifically comparing DIFT$\_{adm}$ and DINO in all our evaluations: two models that share exactly the same training data, i.e., ImageNet-1k without labels.
We'll include these figures and analysis in the future version of the paper.
**Analyzing failure cases**.
Please refer to Fig. 3 in the attached rebuttal pdf file for the failure case visualization, where we randomly select two examples from the SPair-71k categories with the lowest per-class accuracy (i.e., boat and bottle). We can see that, when the semantic definition of keypoints is ambiguous or the appearance change between source and target images is too dramatic, DIFT$_{sd}$ fails to predict the ground-truth labels. We'll include more visualizations and analysis of different failure cases in the future version of the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their detailed responses to my concerns. I believe that by reflecting these responses (and the responses to the other reviewers' concerns), the manuscript will be substantially improved. I would like to improve my rating to accept.
---
Reply to Comment 1.1.1:
Comment: Thanks for reading our rebuttal and getting back to us with a higher score! We appreciate the valuable feedback you have given, and we will revise our paper accordingly in the future version.
---
Rebuttal 2:
Comment: Hi! It is now the discussion period. Please have a look at the rebuttal and let us know if it affected your thoughts on the paper in any way. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and valuable feedback! We are encouraged that reviewers find that our paper is well-written and easy to follow (u3f5, 4nEG, zLC3, rXbs), and that our approach achieves good performance with a simple technique (all 5 reviewers!) along with cool and novel ideas (u3f5, zLC3). Here, we answer a few global questions:
**1. Understand why correspondence emerges from image diffusion, and why it outperforms previous backbones like DINO, OpenCLIP**.
We agree that understanding why correspondence emerges is an intriguing question. Inspired by reviewers' suggestions, we add two more visualizations in the attached pdf. In Fig. 1, for each pair of images, we extract DIFT$\_{sd}$ from the segmented instances, then compute PCA and visualize the first 3 components, where each component serves as a color channel. We can see the same object parts share similar embeddings, which also demonstrates the emergent correspondence. To further explore how this correspondence depends on $t$, in Fig. 2, for the same clean image, we first add different amounts of noise to get different $x_t$ following Eq. 1, then feed it into the network $\epsilon_\theta$ together with time step $t$ to get the predicted clean image $\hat{x}_0^t=\frac{x_t - \sqrt{1 - \alpha_t}\,\epsilon_\theta (x_t, t)}{\sqrt{\alpha_t}}$. We can see that, with the increase of $t$, more and more details are removed and only semantic-level features are preserved, and when $t$ becomes too large, even the object structure is distorted. Intuitively, this explains why we need a small $t$ for correspondences that require details and a relatively large $t$ for semantic correspondence. We'll include more examples of both visualizations in the future version.
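The predicted-clean-image formula discussed above can be sanity-checked numerically: $\hat{x}_0^t$ exactly recovers $x_0$ when $\epsilon_\theta$ returns the true noise. A small sketch with scalars standing in for images (`predict_clean` is a hypothetical helper, not the authors' code):

```python
import math

def predict_clean(x_t, eps_pred, alpha_t):
    # invert the forward process x_t = sqrt(alpha_t)*x_0 + sqrt(1-alpha_t)*eps
    # elementwise, giving the predicted clean image x0_hat
    return [(x - math.sqrt(1 - alpha_t) * e) / math.sqrt(alpha_t)
            for x, e in zip(x_t, eps_pred)]

# sanity check: if eps_pred equals the true noise, we recover x_0 exactly
alpha_t = 0.7
x0 = [0.2, -1.0, 0.5]
eps = [0.3, 0.1, -0.4]
x_t = [math.sqrt(alpha_t) * a + math.sqrt(1 - alpha_t) * b
       for a, b in zip(x0, eps)]
recovered = predict_clean(x_t, eps, alpha_t)
```

With an imperfect noise prediction, the residual error is what produces the progressively blurrier $\hat{x}_0^t$ visualized in Fig. 2 as $t$ grows.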
As a more general explanation for the emergent correspondence, we have the following conjecture. We believe that the diffusion training objective (i.e., coarse-to-fine reconstruction loss) requires the model to produce good, informative features for every pixel. This is in contrast to DINO and OpenCLIP, which use image-level contrastive learning objectives, and may explain why DIFT outperforms them. In our experiments, we have attempted to evaluate the importance of the training objective by specifically comparing DIFT$\_{adm}$ and DINO in all our evaluations: two models that share exactly the same training data, i.e., ImageNet-1k without labels. A rigorous evaluation of our conjecture would be very interesting, but we lack the tools to perform such rigorous analysis: how to understand and explain why certain properties emerge from deep neural networks trained on large-scale data remains an unsolved research problem. We will add this discussion to the camera ready.
**2. Input image size, feature map size and feature dimension used in each correspondence task**.
The input image size varies across different tasks, but we always keep it the same within the comparison vs. other off-the-shelf self-supervised features (i.e., DIFT$\_{adm}$ vs. DINO, DIFT$\_{sd}$ vs. OpenCLIP), so it should be fair. For DIFT, feature map size and dimension also depend on which U-Net layer features are extracted from. ADM’s U-Net has 18 upsampling blocks and SD has 4 upsampling blocks (the definition of blocks varies). Below, for each task, we list all the image sizes, the block index $n$ ($n$ starts from 0), and the size and dimension of the feature maps. The per-pixel feature is extracted through bilinear interpolation on the feature map. The code of DIFT (see `sd_featurizer.py` and `adm_featurizer.py`) is in the supplementary.
For semantic correspondence tasks, we use the input image size of 512x512 for DIFT$\_{adm}$ and 768x768 for DIFT$\_{sd}$. n=4 for DIFT$\_{adm}$ so feature map size is 1/16 of input and dimension is 1024. n=1 for DIFT$\_{sd}$ so feature map size is 1/16 of input and dimension is 1280.
For experiments on HPatches, input image size is 768x768 for both DIFT$\_{adm}$ and DIFT$\_{sd}$. n=11 for DIFT$\_{adm}$ so feature map size is 1/2 of input and dimension is 512. n=2 for DIFT$\_{sd}$ so feature map size is 1/8 and dimension is 640.
For experiments on DAVIS, we use the same original video frame size (480p version of DAVIS, specific size varies across different videos) as in DINO [5]’s implementation, for both DIFT$\_{adm}$ and DIFT$\_{sd}$. n=7 for DIFT$\_{adm}$ so feature map size is 1/8 of input and dimension is 512. n=2 for DIFT$\_{sd}$ so feature map size is 1/8 of input and dimension is 640. For experiments on JHMDB, following CRW [29]’s implementation, we resize each video frame’s smaller size to 320 and keep the original aspect ratio. n=5 for DIFT$\_{adm}$ so feature map size is 1/8 of input and dimension is 1024. n=2 for DIFT$\_{sd}$ so feature map size is 1/8 of input and dimension is 640.
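The bilinear interpolation used above to read a per-pixel feature out of these lower-resolution maps can be sketched as follows. `bilinear_sample` is a hypothetical helper (real implementations would use `torch.nn.functional.grid_sample`); an image-space keypoint would first be divided by the stride (e.g. 8 or 16) to get feature-map coordinates:

```python
def bilinear_sample(fmap, x, y):
    """fmap: H x W x C feature map as nested lists; (x, y): continuous
    coordinates in feature-map space. Returns the interpolated C-dim feature."""
    h, w = len(fmap), len(fmap[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0  # fractional offsets inside the cell
    c = len(fmap[0][0])
    # weighted sum of the four neighboring feature vectors
    return [
        fmap[y0][x0][k] * (1 - fx) * (1 - fy)
        + fmap[y0][x1][k] * fx * (1 - fy)
        + fmap[y1][x0][k] * (1 - fx) * fy
        + fmap[y1][x1][k] * fx * fy
        for k in range(c)
    ]
```

This is how keypoints localized at input resolution (e.g. by SuperPoint) can still be described by a 1/8- or 1/16-resolution feature map.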
We'll add the above details to the future version of the paper.
**3. Latency of DIFT vs. its competitor self-supervised features**.
Since we only perform a single inference step when extracting DIFT features, it actually takes a similar running time to its competitor self-supervised features with the same input image size. Taking semantic correspondence with the above configuration as an example, on a single A6000, for each image, DIFT$\_{sd}$ takes 203 ms vs. OpenCLIP’s 231 ms, and DIFT$\_{adm}$ takes 110 ms vs. DINO’s 154 ms.
In practice, as mentioned in the Line 128-130 of Sec. 4.2, since there is randomness when extracting DIFT, we actually use a batch of random noise to get an averaged feature map for each image to slightly boost stability and performance, which would increase the running time shown above. But if computation is a bottleneck, one can remove this optimization at the cost of a tiny loss in performance: e.g., on SPair-71k, DIFT$\_{sd}$: PCK 59.5→57.9; DIFT$\_{adm}$: PCK 52.0→51.1.
We'll add the above clarification to the future version of the paper.
Pdf: /pdf/c146d7dc523bb64e164b62a1b876fb2781a0a607.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors proposed a method for semantic correspondence using pretrained diffusion model as a feature extractor of the images. Without explicitly training on the additional data/annotations, a simple feature matching based on winner-take-all strategy with cosine distance metric surpasses the previous works on three different tasks; semantic/geometric/temporal correspondence.
Strengths: - Exploration on the usage of diffusion model for correspondence tasks
- Good performances just with frozen pretrained model & simple matching pipeline
Weaknesses: 1. Limited novelty
- Merely replacing the backbone network for feature extraction with the latest model in a straightforward manner does not captivate my interest. It also fails to provide new insights to the readers. It would be beneficial to delve deeper into why these pretrained diffusion models outperform previous backbones like DINO, CLIP, or ResNet.
2. Limited demonstrations
- The paper lacks several essential ablation studies that explore architectural design choices. These studies could include comparing models with and without a decoder, evaluating different options for Q/K/V selection, and assessing the impact of finetuning on correspondence datasets.
- Given the relatively low originality, the paper would benefit from showcasing more diverse pixel-level prediction tasks. This could involve demonstrating the model's performance in object detection, image/ video segmentation, as well as other correspondence tasks such as depth estimation and optical flow. These additional demonstrations would significantly strengthen the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be interesting to see how the performances can be pushed further when sophisticated optimization techniques are applied to the raw matching similarities d(f^1,f^2).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations and potential societal impact of this work are appropriately discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful feedback! Please find our response as below:
**Limited novelty**.
We would like to point out that the "backbone" we are proposing to use comes from a diffusion model which is trained with a generative modeling objective that has prima facie little to do with learning good features. Thus the fact that correspondence emerges from this generative training and can even outperform techniques specifically designed for representation learning (DINO, OpenCLIP) is a surprising finding. Our simple approach for extracting this correspondence suggests that this is something intrinsic to the diffusion training objective. We think it's valuable to share this evidence with the community so that more interesting research directions could be explored, such as understanding why diffusion objectives are more suitable for capturing correspondence, and perhaps rethinking the diffusion model as a self-supervised learner.
**Explain why pre-trained diffusion models outperform previous backbones like DINO, OpenCLIP**.
We agree that more insights would be useful. As mentioned in the global rebuttal (point 1), we have added two new visualizations that we hope can shed some light on the features. We also have a conjecture that the diffusion model training objective (i.e., coarse-to-fine reconstruction loss) requires the model to produce good, informative features for every pixel compared to contrastive learning objectives (i.e., loss on image-level global feature). In our experiments, we have attempted to evaluate the importance of the training objective by specifically comparing DIFT$\_{adm}$ and DINO in all our evaluations, where they share exactly the same training data, i.e., ImageNet-1k without labels. We will add this discussion to the camera ready.
**Evaluating more pixel-level prediction tasks**.
We indeed evaluate on video segmentation (DAVIS) and human keypoint tracking (JHMDB) in Sec. 6.2, and DIFT shows competitive performance despite never being trained with such supervision, or even on video data. We have now also evaluated on image segmentation, where we freeze the backbone and only train a linear head on top of different feature extractors.
The mIOU results on ADE20k val set (single-scale evaluation) are as follows:
| Method | mIOU |
| --------- | -------- |
| DIFT$\_{adm}$ | 29.2 |
| DIFT$\_{sd}$ | 34.5 |
| DINO | 31.1 |
| OpenCLIP | 40.9 |
Note that this task differs from correspondence tasks with feature matching because it requires training a head with downstream supervision, and thus introduces additional parameters and hyperparameters. We performed an additional experiment where we evaluated the off-the-shelf feature correspondence using COCO image segmentation labels (as a precursor to object discovery; see Fig. 3 in STEGO [20]):
| Method | AP |
| --------- | -------- |
| DIFT$\_{adm}$ | 80 |
| DIFT$\_{sd}$ | 78 |
| DINO | 79 |
| OpenCLIP | 65 |
DIFT achieves competitive performance on both these tasks.
Unfortunately for depth estimation and optical flow, current pipelines (e.g., RAFT) usually involve a complex global optimization that is beyond simple local feature matching.
**Exploring more sophisticated design choices or optimization techniques to further improve the correspondence performance**.
In our preliminary experiments, we do find that more sophisticated task-specific adaptation mechanisms could further enhance the diffusion features, e.g., finetuning the network with specific downstream correspondence supervision. However, this would inevitably entangle the quality of the features themselves with the efficacy/strategy/hyperparameters of the adaptation procedure. To keep the focus on the evaluation of the representation itself, we chose to avoid any additional adaptation and demonstrate the raw quality of off-the-shelf DIFT. Also see the discussion (Lines 458-464) in Appendix B.
“Comparing models with and without a decoder”: if "decoder" means the VAE decoder of the diffusion model, we indeed compared two types of diffusion models in all the evaluations: SD is a latent diffusion model with a VAE decoder, while ADM operates directly in pixel space without en/de-coders. Note, however, that for DIFT$\_{sd}$ the decoder is not used during inference. If “decoder” means attaching a task-specific decoder for downstream correspondence tasks, we chose not to do that because it would require finetuning with supervision; see the discussion above.
“Evaluating different options for Q/K/V selection”: our current implementation follows the most straightforward way of getting the intermediate feature map after each resnet-attention upsampling block inside the U-Net (see the class `MyUNet2DConditionModel` of `sd_featurizer.py` in our submitted code). Each SD U-Net attention block contains multiple self-attention and cross-attention layers. Delving into these and exploring key/value/query tokens as representations could potentially give a performance boost, but it also complicates the design choices and usage (different tasks would require different sets of hyperparameters).
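For context, the raw matching discussed throughout this exchange — winner-take-all nearest neighbors under cosine similarity between two feature maps — can be sketched as follows. This is our own illustrative code (the function name and shapes are assumptions, not taken from the submitted code):

```python
import numpy as np

def wta_cosine_match(feat1, feat2):
    """Winner-take-all matching between two feature maps.

    feat1: (N, C) query features; feat2: (M, C) candidate features
    (flattened spatial grids). Returns, for each query, the index of the
    candidate with the highest cosine similarity -- the simple matching
    rule that the raw similarities d(f^1, f^2) are fed into.
    """
    a = feat1 / np.linalg.norm(feat1, axis=1, keepdims=True)
    b = feat2 / np.linalg.norm(feat2, axis=1, keepdims=True)
    sim = a @ b.T            # (N, M) cosine similarity matrix
    return sim.argmax(axis=1)  # winner-take-all over candidates
```

More sophisticated optimization (as the reviewer suggests) would replace the final `argmax` with, e.g., a smoothness or cycle-consistency objective over `sim`.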
---
Rebuttal 2:
Comment: It is now the discussion period! Please have a look at the rebuttal and let us know if it affected your thoughts on the paper in any way! | null | null | null | null | null | null |
CaMP: Causal Multi-policy Planning for Interactive Navigation in Multi-room Scenes | Accept (poster) | Summary: This paper introduces the multi-room interactive navigation problem and proposes a novel model that is motivated by counterfactual reasoning. In particular, the paper posits that obstacle objects serve as a confounding factor when understanding the relationship between actions taken and the outcomes observed / reward received. To address this, a counterfactual reasoning based model is proposed, which explicitly encourages exploring actions outside the distribution predicted by the policy (i.e., what would happen if I did action X instead of Y?). The proposed model is hierarchical in nature, with low-level policies for navigation, picking, and pushing skills, and higher-level policies for selecting a skill to execute. The higher-level policy is conditioned on an intent, specifying where the policy is likely navigating to next. This is expected to allow the policy to account for the intent and explore actions that go against the intent. Results on the ProcThor simulated dataset demonstrate the superiority of the proposed policy over alternative baselines.
Strengths: * The idea of performing counterfactual reasoning is interesting and novel in the context of embodied AI.
* The problem setting proposed is a good extension to prior work on single-room interactive navigation and is valuable for the community to work on.
* The paper clarity is good, but it makes assumptions about how knowledgeable the reader is with causal inference (see weaknesses).
* The experiments are well designed and ablation studies convey useful information to understand the overall model. The proposed model also performs much better than reasonable baselines.
Weaknesses: # Post-rebuttal comments
* The authors have sufficiently addressed my concerns and provided new experiments to quantify the improvement in interaction-ability. I'm happy to raise my rating to accept.
--------------------------------------------------------------------------------------------------
# Paper writing clarity
* The paper's writing clarity can be improved quite a bit with regard to causal inference. Since this is a relatively new topic in the embodied AI space, most readers may be unfamiliar with the topic and with jargon like "counterfactual reasoning", "confounding factors", "structural causal model", "structural functions", etc. Also, L129 was not obvious from my first reading (i.e., the difference between do(A) and A).
* More clarity can be provided about what the counterfactual situations here are (e.g., more examples like L131-133).
* In Figure 3, is the entire model differentiable? For example, are gradients from the loss propagated through to intent predictions w_t^{'} and a_t^{'}?
* In the experiments, an explicit connection should be made to how the proposed model is using counterfactual reasoning and how it addresses the issue of confounding factors.
# State vs. obstacles as confounding factors
The idea of treating obstacles as confounding variables makes sense. But at a high level, isn't the state variable itself a confounding factor? What is the value of isolating only obstacles here?
# Experiment section can be improved
* Error bars are missing in Tables 3 and 4. It will be useful to have results from training and evaluating on multiple seeds, especially for the top-3 methods (NIE, PPO+intent, CaMP). Similar issue for Table 4.
* In Table 4, why is "sync. /3*rollout" worse than "integrated intent"? Aren't they both the same models?
* In Table 4, why is "sync. /epoch" worse than "wo/intent"? If CaMP is synchronized frequently, I would expect the intent to be selected as the action itself, leaving little to no room for counterfactual exploration. So "sync. /epoch" should match "wo/intent", right?
* There is no analysis on how well each models interact with the "relevant" obstacles, i.e., obstacles that lie along the shortest path and can be moved. This can be measured via precision and recall metrics.
* Precision = fraction of interacted objects that are "relevant" obstacles
* Recall = fraction of "relevant" obstacles interacted with
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Kindly address the weaknesses stated above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: No limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the appreciation and suggestions for our work. We address the concerns in the following lines.
Q1. Paper writing clarity. The reviewer questions about "the difference between do(A) and A", "what the counterfactual situations here are (e.g., more examples like L131-133)", and "In Figure 3, is the entire model differentiable?"
A1. Thanks for the suggestion and we will revise our paper to further elaborate on the causal preliminaries. Our replies to the questions are as below:
* The do-calculus $do(A=a)$ is an intervention operation that actively sets the variable $A$ to $a$ in the system while keeping the rest of the system unchanged, yielding the post-intervention distribution $P(R|do(A=a))$. In contrast, the conditional distribution $P(R|A=a)$ is observed passively under the condition that $A$ happens to be $a$. Thus the actions executed by embodied agents can naturally be regarded as interventions $do(A)$ on the environment. Formally in our case (Figure 2(a)), $P(R=r|do(a))=\sum_{s} P(r|a,s)P(s)$, while $P(R=r|a)=\sum_{s} P(r|a,s)P(s|a)$. Since the agent decides the action $a$ based on its observation of the state $s$, the distributions of $a$ and $s$ are not independent, which results in $P(s)\neq P(s|a)$ and hence $P(R=r|do(A=a))\neq P(R=r|A=a)$.
* The counterfactual situation we discuss $R_{A=a}|A=i$ can be understood as the result the agent would obtain had it taken action $a$, given that it intends to take $i$. For example, in the initial stage of training, when encountering a table the agent may intend to bypass the obstacle since it lacks interaction skills and interactions empirically lead to low rewards. With the help of counterfactual policy, the agent may explore the counterfactual situation when it tries to push the table aside given its intent is to rotate right. Once the interaction successfully clears the path, the agent will obtain a relatively higher reward than navigation, which becomes a valuable experience for the agent to learn interactive strategy.
* The entire model is differentiable, because the integrated intent $i_t$ is obtained as the weighted sum of the low-level action distributions. However, we do not update the intent network with gradients from the loss; instead, the parameters of the intent network are updated by synchronization with the master policy.
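The $do(A)$-vs-$A$ distinction in A1 can be checked numerically. Below is a minimal toy sketch in the same spirit as Figure 2(a), with a binary state confounding action and reward; all probability tables are invented for illustration:

```python
import numpy as np

# Toy binary system: state S confounds A and R, because the behaviour
# policy chooses A based on S (the analogue of the agent acting on its
# observation of the scene).
p_s = np.array([0.5, 0.5])                 # P(S=s)
p_a_given_s = np.array([[0.9, 0.1],        # P(A=a | S=s): rows s, cols a
                        [0.1, 0.9]])
p_r1_given_as = np.array([[0.2, 0.8],      # P(R=1 | S=s, A=a): rows s, cols a
                          [0.6, 0.4]])

a = 1  # query action

# Interventional: do(A=a) cuts the S -> A edge, so S keeps its marginal P(s).
p_r1_do = sum(p_r1_given_as[s, a] * p_s[s] for s in range(2))

# Observational: conditioning on A=a tilts the state distribution via Bayes.
p_s_given_a = p_a_given_s[:, a] * p_s
p_s_given_a /= p_s_given_a.sum()
p_r1_obs = sum(p_r1_given_as[s, a] * p_s_given_a[s] for s in range(2))

print(round(float(p_r1_do), 3), round(float(p_r1_obs), 3))  # -> 0.6 0.44
```

The two quantities differ precisely because $P(s) \neq P(s|a)$, which is the confounding the rebuttal describes.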
Q2. State vs. obstacles as confounding factors. The reviewer questions that "But at a high level, isn't the state variable itself a confounding factor? What is the value is isolating only obstacles here?"
A2. We address obstacles in isolation because they are crucial Unobserved Confounders (UCs) with a negative impact, whereas the distribution of the state is observable through visual observations, even though the state $S$ is indeed also a confounder of the causality $A \rightarrow R$. In essence, the agent aims to learn and optimize the state-specific causal effect $E[R|do(A=a),S=s]$ (L127), which represents the reward expectation when the agent takes action $a$ while observing state $s$. Therefore the causality $S \rightarrow R$ in Figure 2 is also useful for policy learning and should be considered. However, the distribution of obstacles cannot be measured by the policy $\pi(s)$, which is harmful to learning an effective policy.
Q3. Experiment issues. The reviewer argues that "Error bars are missing in Tables 3 and 4.", "There is no analysis on how well each models interact with the "relevant" obstacles", and questions about the model performances.
A3. Thanks for the concerns and we address them by items as follows:
* We report the performances of models (in Table 3,4) by averaging their scores on 5 tests with the same random seed. We now evaluate several methods on **multiple seeds** in the table below (in the form of *mean* $\pm$ *variance*). The results show that the methods obtain stable performances on multiple seeds.
* In the ablation study, we vary the sync frequency on model "ppo+intent" (L264) in order to study the effect on counterfactual policy and remove the influence of applying multi-policies. Thus in Table 4, "sync./3*rollout" corresponds to "ppo+intent" while "integrated intent" is identical to CaMP.
* According to the clarification above, the "sync. variants" are modified based on "ppo+intent" while the "intent variants" are modified based on CaMP in Table 4, which makes the performance of "sync./epoch" relatively worse than "wo/intent".
* In InterNav, the agent interacts with the "relevant" obstacles in order to navigate more efficiently along a shorter path. Thus how well a model interacts with obstacles is reflected in how much its interactions shorten the path. To that end, we apply the SPL metric to measure the length of the path the agent takes to complete the task.
Furthermore, to answer the question "Does the shortest path mean the most efficient navigation?", we consider an example where a large-size table lies in front of the agent on the shortest path, with an apple on the right side of the table. Obviously, moving the table aside is harder than picking up the apple and may be less efficient. Thus we additionally evaluate a metric STS (Success rate weighted by Time Steps, see our reply to *reviewer HVkt* for more details) to analyze the time efficiency of task completion in the table below. The results on SPL and STS show that our method obtains better path and time efficiency compared with the baselines.
|Methods|SR (%)|SPL($_{\pm e-5}$)|FDT($_{\pm e-3}$)|STS($_{\pm e-5}$)|
|:----:|:----:|:----:|:----:|:----:|
|PPO|42.3$_{\pm 0.57}$|0.249$_{\pm 1.98}$|4.86$_{\pm 1.56}$|0.139$_{\pm 0.87}$|
|NIE|51.3$_{\pm 1.01}$|0.290$_{\pm 2.45}$|3.86$_{\pm 4.22}$|0.157$_{\pm 2.22}$|
|PPO+intent|53.0$_{\pm 1.93}$|0.296$_{\pm 2.20}$|3.80$_{\pm 1.71}$|0.163$_{\pm 1.14}$|
|CaMP|**56.7**$_{\pm 1.19}$|**0.309**$_{\pm 2.75}$|**3.71**$_{\pm 1.97}$|**0.176**$_{\pm 1.02}$|
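For reference, the SPL reported above is the standard Success weighted by Path Length metric. A minimal sketch of its computation (our own illustrative code; STS would follow the same pattern with time steps in place of path lengths, per the authors' definition in their reply to reviewer HVkt):

```python
def spl(successes, shortest_lengths, actual_lengths):
    """Success weighted by Path Length:
    SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i),
    where S_i is 1 if episode i succeeded (else 0), l_i is the
    shortest-path length, and p_i is the path length the agent took.
    """
    n = len(successes)
    total = 0.0
    for s, l, p in zip(successes, shortest_lengths, actual_lengths):
        total += s * l / max(p, l)
    return total / n
```

An agent that succeeds but takes twice the shortest path scores 0.5 on that episode; failures score 0 regardless of path, which is why SPL rewards interactions that genuinely clear a shorter route.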
---
Rebuttal Comment 1.1:
Title: Reviewer response to rebuttal
Comment: I thank the authors for their detailed and helpful rebuttal responses. Quite a few of my concerns are alleviated. I have one follow-up question, and I will decide my final rating based on the above responses and responses to the follow-up question.
## Detailed comments to each response
* Q1 - thank you for the clarification. Please incorporate these in the paper to strengthen further.
* Q2 - I understand the point that the authors are making. The problem here is using “state” to refer to the agent’s observations is a misnomer. By definition, the state must encompass everything there is to know about the world and the agent, and includes the obstacles as well. In this work, “state” is referred to as the limited part of the state called “observations”, and that may not include the obstacle information. It would be great if this can be clarified in the final paper too.
* Q3a - Thanks for the random seed experiments. Please bold results based on significance testing. It looks like only SR and STS improvements are statistically significant relative to NIE / PPO+intent. SPL and FDT are comparable. I expect the gains to be “significant “for the harder case.
* Q3b,c - Good point. My bad, thanks for clarifying.
* Q3d - I agree that the numbers are better with intent (e.g., PPO vs. PPO+intent). I am just wondering if there are more direct metrics that can quantify how well the agent learns to interact with objects due to causality. E.g., the PPO+intent agent does not try to lift heavy objects and moves them instead — unlike PPO, the PPO+intent agent avoids interactive with large and heavy objects and moves around them instead.
## Follow-up question
* Could the authors please define more direct metrics that can quantify the outcomes of the proposed model (other than the indirect task-performance metrics)? The authors have motivated the need for causal reasoning, but whether any of these outcomes are achieved is unclear. I have given some examples above in the detailed comments.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the recognition and further concern. We will incorporate the clarification (Q1) and the results (Q3) of our rebuttal in the final paper. Our response to the further comment and the follow-up question is as below.
* **Response to further comment on Q2:** Thanks for pointing that out. It's worth clarifying in the final paper that the policy $\pi(a_t|s_t)$ takes in the observation of the state rather than the whole state information. In the causal model we define in Figure 2, the node $S$ denotes the state encompassing all information of the environment, which includes all potential obstacle information (node $O$). However, the state and obstacles causally influence agent's actions in a partially observable way through visual observations (i.e. $S \rightarrow A$, $O \rightarrow A$). And we focus on addressing the crucial UC (obstacles) instead of all unobserved states in our work.
* **Response to the follow-up question:** We agree that the metrics of overall performance (SR, SPL) are indirect and insufficient for measuring the interaction outcome of our method. Rather than acting purely on intuition, our causal agent learns to take "wiser" interactions than its intuition would suggest. For verification, we evaluate PPO and PPO+intent on several additional metrics in the table below: (1) **Interaction Reward (IR)**: the reduction of geodesic distance $\Delta_{dis}$ caused by interactions. A case of high IR is that the agent pushes a table aside (rather than forward) to clear the path, knowing it will help complete the task better. (2) **Interaction Success Rate (ISR)**: the ratio of interactions taken successfully (a more fine-grained metric than the overall metric SR, which measures task completion). A failed interaction case is that the agent tries to pick up a heavy table or attempts to push a faraway chair, without realizing which interactions are feasible in the current situation. We also report the success rates of *Push* and *Pick* interactions separately as **PuSR** (*Push* Success Rate) and **PiSR** (*Pick* Success Rate). Through the metrics above, we measure the outcome of our causal method in pursuing interaction efficacy: reducing the distance to the target (IR) and avoiding invalid actions (ISR). The results show that, compared with a standard policy, our intent-aware policy gains a stronger ability to take effective and appropriate interactions.
|Methods|IR($_{\pm e-5}$)|ISR(%)|PuSR(%)|PiSR(%)|
|:----:|:----:|:----:|:----:|:----:|
|PPO|0.271$_{\pm3.54}$|23.2$_{\pm1.12}$|23.8$_{\pm1.18}$|21.9$_{\pm0.97}$|
|PPO+intent|0.371$_{\pm2.28}$|27.0$_{\pm1.27}$|25.1$_{\pm1.44}$|28.6$_{\pm1.22}$| | Summary: Broadly, the paper tackles the Interactive Navigation task: navigating to a goal and interacting with obstacles as necessary, e.g. pushing a chair out of the way.
They use the ProcTHOR simulator with 12k multi-room scenes and generate navigation episodes that are suitably cluttered with obstacles. Their embodiment is abstract, with a discrete high-level action space (e.g. PushLeft, PickUp, Drop, RotateLeft).
Their approach is a hierarchical model, with three pre-trained low-level "action policies" (skills): navigate, push, and pick. A master policy sequences the three. Specifically, the navigation policy is run by default; the master policy can interrupt it by invoking an interaction policy (push or pick); the interaction policy then runs exclusively until it self-terminates, returning control to the master policy.
Their main contribution is the addition of an intent policy; they claim this helps the master policy make better decisions in the presence of obstacles. I'll describe this more below.
They evaluate their approach against several baselines:
* random actions
* a monolithic (non-hierarchical) sensors-to-actions policy trained with end-to-end RL, labeled as PPO
* a prior approach labeled as NIE that aims to predict state change of observed objects
* a hierarchical policy that uses the same nav/push/pick action subspaces as CaMP but differs in how they are combined
* a monolithic sensors-to-actions policy with the addition of CaMP's intent policy
They also study ablations related to the intent policy.
I'll now describe the intent policy in detail, with the caveat that I'm looking for further clarification from the authors here (see Questions section).
The architecture of the intent and master policy are mostly identical; their inputs include goal embedding and extracted visual features and they feature recurrent units.
At timestep t, the intent policy outputs w'.t, a distribution over the three action policies, essentially choosing between push/pick/navigate. Meanwhile, each action policy j outputs a'.t^j, a distribution over discrete actions. Finally, the intent is the sum of all a'.t weighted by w'.t. For example, if the intent policy on a given step is biased towards push and the push action policy is biased towards the PushLeft action, the overall intent will be biased towards PushLeft.
This intent is fed to the master policy alongside observations (together, "intent-specific state"), which chooses an action policy (either allowing the nav policy to continue, or interrupting by invoking push or pick).
The intent policy and master policy generally share parameters and generally behave similarly, except the intent policy is synced from the master policy only periodically during training, such that in practice the intent will differ from the taken action some of the time.
They describe this intent-informed policy as "exploring counterfactuals", and a component here is learning a value function for the above-mentioned intent-specific state.
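The weighted-sum construction of the intent described above can be sketched as follows. This is an illustrative sketch only, assuming the skills' action distributions are aligned (or padded) to one shared discrete action space:

```python
import numpy as np

def integrated_intent(policy_weights, action_dists):
    """Combine per-skill action distributions into a single intent.

    policy_weights: (K,) distribution over the K action policies
    (e.g. navigate / push / pick), produced by the intent policy.
    action_dists: (K, A) rows, each skill's distribution over a shared
    discrete action space of size A (alignment is assumed here).
    Returns the (A,) weighted-sum intent distribution: sum_j w_j * a_j.
    """
    w = np.asarray(policy_weights, dtype=float)
    a = np.asarray(action_dists, dtype=float)
    return w @ a
```

If the weights and each row are valid distributions, the result is a valid distribution too, so the intent can be fed to the master policy in the same form as an action distribution.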
Strengths: The InterNav task is a challenging, relevant task for Embodied AI.
The task, scene/episode dataset, and model architecture are clearly communicated in figures and text, with minor exceptions noted below.
The scenes and episodes appear diverse and high-quality.
Baselines and ablation study are rigorous.
The authors show strong performance against their baselines.
Weaknesses: There's ambiguity on some paper details. I'll list bullets here; see my questions for more details:
= action space and positioning of objects
= object dynamics in ProcTHOR
= details of invoking and terminating interaction subtasks
= learning for the intent policy parameters
The paper doesn't discuss sim-to-real transfer or otherwise discuss how CaMP might be applied in a real-world setting.
The biggest weakness is that the intuition and conceptual value of the intent component aren't clearly articulated. For details, see my questions below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In Figure 5 and in your video at 0:33, we see the agent making only 90-degree turns. However, the PPO baseline visualized in Figure 5 shows other turn angles. Can you explain this? Is your CaMP agent trained using a more restricted action space (versus action space in baselines)? If so, does this make your comparison results to baselines less clear? I would speculate that restriction to 90-degree orientations would simplify learning particularly in your scenes, which appear to be mostly axis-aligned (Figure 4; walls and most furniture). And a related question: 5.1 mentions "adjacent nodes". Is there a discrete structure to the position of objects, e.g. a grid?
Can you explain a bit about the object dynamics of ProcTHOR, specifically, how does the simulator decide which objects are nonmovable, pushable, or pickable? And can you speculate on how the trained CaMP policy is able to predict this? I.e. is it overfitting to a specific object set, generalizing based on apparent object size in the image sensors, or something else?
>Once an interaction policy is called, it does not return the control until the sub-task termination (output Done).
Can you elaborate on how this works? For example, does the master policy include a recurrent unit, and does this unit receive continuous observations even while an interaction sub-task is running? Related: it would help to modify Figure 3. As-is, Figure 3 suggests that the master policy is free to select any action policy on any step.
>To ensure that the intent represents the decision of the agent, we synchronize the parameters of two networks (π'Ω ← πΩ) during training.
In between these syncs, is the intent policy being updated during the learning phase of PPO, or is it frozen?
Can you discuss sim-to-real? How might CaMP be used in a real-world setting?
Related to sim2real, can you also address one specific concern? First, a few assumptions:
= you wish to keep the same hierarchical architecture
= you could build low-level push/pick/nav policies that work on a real robot
= you require most training of the master policy to happen in simulation due to the required scale of experience-collection
My specific sim2real concern: from 0:29 in the video, it appears that the push subtask doesn't move the agent or change its view (it's a "magic push"). The agent isn't required to approach the object in a certain way or move itself to effectively push the object. So, I speculate that this makes it remarkably easy for the master policy to chain navigation and pushing: the navigation policy essentially resumes after push completion from exactly its last pose, almost as if the obstacle were teleported away. Chaining real-world nav and push subtasks in this way would likely fail due to the push task moving the agent.
I'd like greater clarity in section 3.2, on counterfactual decision-making and the basis for your intent policy and overall causal policy. A few specific questions:
>Trained based on its own intent, the agent can obtain both experimental experiences (when a.t = i.t) and counterfactual experiences (when a.t != i.t), boosting the exploration of new strategies
Can you elaborate? How does the causal policy architecture "boost" this exploration? How does this produce better exploration than, say, simply tuning temperature on a stochastic policy?
>The intent also provides context about the obstacles due to their causal relation
Can you elaborate? Can you experimentally verify/quantify what context the intent is providing about obstacles?
>Intuitively, an agent that explores different counterfactual situations (e.g. "what if I push/pick up the box instead of bypassing it?") has a better understanding of the expected value of the current state, compared to the agent taking actions out of intuition.
It seems to me that a stochastic policy will try pushing, picking, and bypassing the box many times over the course of large-scale training and thus get this understanding of the expected value. What is the essential difference with your causal policy?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No concerns here
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed questions and reply to them in the following lines.
Q1. Action space and positioning of objects.
A1. Our CaMP agent is trained using exactly the same action space as other baselines. In particular, the agent makes only 90-degree turns when taking *RotateRight* and *RotateLeft*. The turning angles shown in Figure 5 are just for visualization by combining the trajectories of several adjacent steps. In section 5.1, we mention "adjacent nodes" referring to different doors on the path of cross-room navigation. As for the position of objects, all obstacles are spawned on the reachable points of agents, which are distributed on a grid network with a grid size of 0.25m.
Q2. Object dynamics in ProcTHOR.
A2. The interactive attributes of objects are determined by their object categories (e.g. all tables are unmovable) in the ProcTHOR simulator. We speculate that the CaMP policy learns those attributes by generalizing from both visual appearance (image and depth sensors) and prior experience of interaction.
Q3. Details of invoking and terminating interaction subtasks.
A3. The master policy consists of a GRU and an MLP. It keeps receiving observations while an interaction sub-task is running, but its output does not influence the agent's behavior during that time. In Figure 3 we use green arrows to denote the call-and-control procedure of the multi-policy, and we will modify the figure to further clarify the transition of control.
Q4. Learning for the intent policy parameters.
A4. We keep the parameters of the intent policy frozen between syncs, since the intent produced by the intent network merely replicates the decision of the master policy.
Q5. Sim-to-real transfer and application in a real-world setting.
A5. We appreciate the thoughtful concern and agree with the assumptions. It is true that the sim-to-real gap for object interaction in ProcTHOR is considerably larger than that for navigation. However, we speculate that the main challenge of applying our method lies in interaction rather than navigation, since object manipulation and interaction dynamics are harder to learn with the limited training data available in the real world. Meanwhile, in our hierarchical framework, each call of the nav policy can be regarded as a standalone navigation toward the target from a different starting position, which reduces the complexity of long-term navigation. Recent developments in simulators [1,2] make large-scale training of object manipulation with robotic arms possible. We therefore believe it is practical to transfer our method first to manipulation-inclusive simulated tasks and then to real-world settings to overcome the sim-to-real challenge.
**Q6. Intuition and conceptual value of the intent component**. The reviewer asks "How does the causal policy architecture "boost" this exploration", "What is the essential difference with your causal policy" compared with a stochastic policy, and "what context the intent is providing about obstacles".
A6. We elaborate our idea of "intent-aware counterfactual exploration" as follows.
* First, the counterfactual policy boosts the exploration of new strategies by accumulating new experiences posterior to those of the original policy. During RL training, an experience used for loss calculation is commonly denoted as a tuple $e_t=\left \langle s_t,a_t,r_t,s_{t+1} \right \rangle \in E$, where $r_t,s_{t+1}$ are determined by $s_t$ and $a_t$. Since the counterfactual policy is aware of the intent, i.e. $s_t^{\prime}=(s_t,i_t)$, its experiences $E^{\prime}$ carry more information than the experiences $E$ of a standard policy. Moreover, the action distribution follows $P(a_t|i_t)=\pi(s_t,i_t)$, which is posterior to the intent distribution $P(i_t)=\pi(s_t)$. Thus $E^{\prime}$ contains two sets: experimental experiences $E_{exp}|\_{a_t=i_t}$ that the original policy "would have collected", and counterfactual experiences $E_{ctf}|\_{a_t \neq i_t}$ that explore other strategies.
* Therefore, second, counterfactual exploration has three advantages over stochastic exploration: 1) $s_t^{\prime}$ provides more knowledge about the environment (discussed below). 2) While stochastic experiences are independently distributed, $E_{ctf}$ is collected based on the original strategy, making it more valuable. For example, an agent that tries to push a box, knowing its original intent is to bypass it, may learn that pushing is more effective in this situation since it receives a higher reward; random trials, in contrast, are hard to correlate and result in inefficient learning. 3) Stochastic experiences are not applicable to on-policy algorithms like PPO, since a random policy differs too much from the policy being trained. In contrast, the counterfactual policy explores new experiences by merely changing the input (adding intent) while maintaining the same policy function $\pi$.
* Third, the intent encodes knowledge about obstacles through the agent's understanding of the environment. Since the intent shares the same causal parents as the action (S $\rightarrow$ I $\leftarrow$ O in Figure 2), the policy network can be regarded as an encoder leveraging its prior experience. The intent thus serves as a proxy for the influence of the UC [3].
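As a minimal sketch of the experience partition described above (assuming a hypothetical tuple layout $(s_t, i_t, a_t, r_t, s_{t+1})$, not the paper's actual data structures):

```python
def split_experiences(experiences):
    """Partition experiences E' from an intent-aware (counterfactual) policy.

    Each experience is a hypothetical tuple (s_t, i_t, a_t, r_t, s_next).
    E_exp (a_t == i_t): what the original policy "would have collected".
    E_ctf (a_t != i_t): counterfactual experiences exploring other strategies.
    """
    e_exp = [e for e in experiences if e[2] == e[1]]
    e_ctf = [e for e in experiences if e[2] != e[1]]
    return e_exp, e_ctf

# Example: the agent's intent is "bypass"; in the second step it instead pushes.
buffer = [("s0", "bypass", "bypass", -0.01, "s1"),
          ("s1", "bypass", "push", 0.24, "s2")]
e_exp, e_ctf = split_experiences(buffer)
print(len(e_exp), len(e_ctf))  # 1 1
```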
**References**
[1] Ehsani, Kiana, et al. "Manipulathor: A framework for visual object manipulation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.
[2] Xiang, Fanbo, et al. "Sapien: A simulated part-based interactive environment." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[3] Forney, Andrew, Judea Pearl, and Elias Bareinboim. "Counterfactual data-fusion for online reinforcement learners." International Conference on Machine Learning. PMLR, 2017.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed rebuttal! I've revised my rating and I have no other questions.
---
Reply to Comment 1.1.1:
Comment: Thanks for the appreciation and useful suggestions. We'll revise the final version according to the suggestions and our responses.
---
Rebuttal 2:
Comment: We really appreciate the reviewer's concerns and suggestions for this work. We hope our response has resolved the confusion and questions in the review. If there are any questions or further comments, please let us know and we will try our best to answer them! | Summary: This paper introduces a causally-inspired hierarchical policy framework for the interactive navigation task in the AI2THOR environment. The framework consists of a master policy, intent policy, and three sub-control policies. The intent policy embeds intuitive intents from the sub-control policies into the master policy, enabling it to make counterfactual decisions. The proposed approach, named CaMP, is evaluated on a newly collected dataset in the ProcTHOR multi-room scenes. The experimental results show that CaMP outperforms the baselines, achieving the best performance on the interactive task.
Strengths: +) The interactive navigation (IN) task is an interesting and challenging task in the field of embodied AI. This paper contributes to the progress in this area by introducing a causally-inspired hierarchical reinforcement learning (HRL) policy. The use of a newer and larger dataset with complex scene layouts and diverse objects enhances the realism of the task. The results demonstrate the effectiveness of the proposed HRL policy, highlighting the advancements in tackling the challenges of interactive navigation.
+) The decomposition of the embodied policy into a master policy and sub-control policies is a reasonable and effective approach for the IN task. This work demonstrates the potential of HRL in tasks where interactions with the environment are crucial for achieving goals.
+) The explanation of confounding bias and the causal diagram provided in Fig. 2(a) help clarify the concept and its relevance to the IN task. However, the discussion of heavy obstacles as an example may need further refinement.
+) The experimental results presented in the paper provide strong evidence for the effectiveness of CaMP. It outperforms several baselines, including PPO, NIE, and HRL, on the IN task using the newly collected large-scale dataset in ProcTHOR multi-room scenes.
Weaknesses: -) The connection between confounding bias resulting from unmeasurable obstacles and the counterfactual policy design is not adequately explained. It remains unclear how the counterfactual policy, through the intent policy, effectively addresses the bias caused by obstacles. The paper lacks a convincing explanation for the direct line from $O$ to $A$ in Fig. 2(b). It is not clear how the weighted-sum of action logits from the sub-control policies can fully capture the policies' intents and uncover the causality depicted in Fig. 2(a) ($O$ -> $A$ and $O$ -> $R$).
-) The model design lacks intuition. Instead of introducing an additional Intent Policy to generate intents, a more straightforward approach would be to recursively use the Master Policy to obtain intents $P^j(i_t)$ and provide feedback to the Master Policy. This recursive feedback mechanism could be extended to multiple levels, denoted by $j$ ∈ $J$, with $P^0(i_t)$ representing a void intent. Additionally, rather than using the action logits from the sub-control policies to represent intents, exploring the utilization of hidden features returned from the GRU in each sub-policy would offer a more intuitive approach.
-) Several implementation details are missing, such as determining which object to interact with when using Push/Pick actions, the force applied during the Push action, and whether the force is correlated to the object's mass. Further clarification on these aspects would greatly enhance the understanding of the proposed approach. More questions regarding missing details can be found in the Question.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please address the following points for a more comprehensive understanding of the proposed approach.
Clarifications:
o) Please provide further clarification on the connection between the confounding bias resulting from unmeasurable obstacles and the CaMP model design, specifically the relationship between Sec. 3 and Sec. 4.
o) Clarify why using the weighted-sum of action logits is an effective way to represent intent and its relationship to the confounding bias.
Missing details:
o) How to determine which object to interact with when using the Push/Pick actions?
o) What is the amount of force applied during the Push action, and is it correlated with the object's mass?
o) Provide the exact dimension of the intent embedding $P(i_t)$. Is it equivalent to the size of the total action space?
o) Provide architectural details of the policy, such as whether the CNN is a simple CNN or a CLIP pretrained ResNet, and the number of layers in the GRU.
o) Clarify the meaning of "epochs with a rollout of data" at L191. Does it refer to the number of update iterations using a rollout of data in the PPO implementation in AllenAct?
o) Missing implementation details of the HRL baseline. Does the HRL apply all outputs from the sub-control policies in a single execution, or does it follow a traditional HRL policy where the master policy calls one of the sub-control policies at each step?
Confusions:
o) Clarify whether $r_{tp}$ at L227 is the same as $r_{sp}$ at L229.
o) At L235, it is mentioned that $r_{nav}$ is used to learn the push and pick sub-control policies. This appears to contradict the context provided at L235. Please clarify the requirements for task success, specifically if the agent needs to reach the target position.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: -) While the proposed CaMP is interesting and evaluated in a complex environment with non-trivial tasks, it would be beneficial to see more results in a simpler environment. For example, evaluating the model on a 2D environment like PettingZoo, where a dot agent performs the interactive navigation task by interacting with obstacles, could provide insights into how the model disentangles intertwined factors by the causally-inspired design in a more controlled setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive criticism. We address the concerns in detail in the following lines.
Q1. Model design explanation. The reviewer asks about "the connection between confounding bias resulting from unmeasurable obstacles and the counterfactual policy design" and "how the weighted-sum of action logits from the sub-control policies can fully capture the policies' intents and uncover the causality depicted in Fig. 2(a)", and argues that "a more straightforward approach would be to recursively use the Master Policy to obtain intents and provide feedback to the Master Policy".
A1. We appreciate the questions; further clarification is detailed as follows.
First, counterfactual decision-making theoretically addresses the problem of learning a sub-optimal policy under confounding bias from unmeasurable obstacles. Obstacles in the InterNav scenario can be regarded as an unobserved confounder (UC), since they influence both the decision-making of actions and the generation of rewards. For instance, the agent may decide to take object interactions when encountering an obstacle (O $\rightarrow$ A). Here, obstacles (O) serve as the mediator from state to action (S $\rightarrow$ A). The causality from action to reward (A $\rightarrow$ R) is confounded by the UC, leading to poor estimates of value (accumulated reward), which likely result in a sub-optimal policy during RL training. In causal inference theory, unlike observable confounders, UCs can hardly be addressed directly with methods such as *Intervention* or *Back-door adjustment*. However, it can be theoretically proved that a counterfactual policy considering intent obtains more value than a standard policy when a UC exists [1].
In Section 4, we apply the counterfactual policy to a hierarchical decision framework. In addition to addressing the UC, learning a counterfactual policy also addresses the indirect-feedback problem of the master policy by providing it with information about low-level decision-making through the integrated intent.
Second, we believe the sum of actions from the sub-control policies, weighted by the master policy's decision, can fully represent the agent's hierarchical intent, since it contains the agent's intent on each atomic action and the full distributional information of the four policies. Since the primary definition of intent is "action before execution", i.e. $I=i_t=f_i(s_t,o_t)$, we do not see a rationale for implementing intent with hidden features from the GRU instead of action logits.
Third, we find the idea of extending the recursive feedback to multiple levels interesting, since it may expose the effect of "recursive intent", namely intent generated based on a prior intent. We are training new models based on the PPO baseline to study how recursive intent may help policy learning and will report the results in the final paper (given the limited time for model training). Nonetheless, we believe implementing the agent's intent with an intent policy is reasonable, and it gives us the flexibility to utilize intent from earlier iterations to balance policy exploration.
Q2. Missing details and confusions.
A2. Thanks for pointing them out and we will reply to them item by item as follows.
(1) **Q**: "How to determine which object to interact with when using the Push/Pick actions?"
**A**: When taking Push/Pick actions, the object to interact with would be the closest pushable/pickable (predefined according to category) and observable (within 1.25m) object.
(2) **Q**: "What is the amount of force applied during the Push action, and is it correlated with the object's mass?"
**A**: The amount of force applied to the object during the Push action is 100 newtons.
(3) **Q**: "Provide the exact dimension of the intent embedding $P(i_t)$. Is it equivalent to the size of the total action space?"
**A**: The dimension of intent embedding is 12, equivalent to the size of our action space.
(4) **Q**: "Provide architectural details of the policy, such as whether the CNN is a simple CNN or a CLIP pretrained ResNet, and the number of layers in the GRU."
**A**: The CNN is implemented as a simple CNN and the number of GRU layers is 1, which is in line with prior work [2] and the default setting of AllenAct.
(5) **Q**: "Clarify the meaning of "epochs with a rollout of data" at L191. Does it refer to the number of update iterations using a rollout of data in the PPO implementation in AllenAct?"
**A**: “Epochs with a rollout of data” refers to the number of update iterations using a rollout of data in PPO.
(6) **Q**: "Missing implementation details of the HRL baseline. Does the HRL apply all outputs from the sub-control policies in a single execution, or does it follow a traditional HRL policy where the master policy calls one of the sub-control policies at each step?"
**A**: In the HRL baseline, the master policy calls one of the sub-policies (with the same splits of action space of our model) at each step and the action is output by the sub-policy.
(7) **Q**: "Clarify whether $r_{tp}$ at L227 is the same as $r_{sp}$ at L229."
**A**: It's a typo; both should be the step penalty $r_{sp}=0.01$.
(8) **Q**: "At L235, it is mentioned that $r_{nav}$ is used to learn the push and pick sub-control policies. Please clarify the requirements for task success, specifically if the agent needs to reach the target position."
**A**: The reward for the interactive auxiliary tasks is $r_{inter}=r+r_{as}-r_{af}$, where $r=r_{success}+\Delta_{dis}-r_{sp}$ and $r_{success}$ is obtained when the goal of the interactive task is achieved (taking *Done* once the obstacle is cleared).
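A minimal sketch of this reward shaping follows; only $r_{sp}=0.01$ is stated in this thread, so the magnitudes of $r_{success}$, $r_{as}$, and $r_{af}$ below are placeholder assumptions:

```python
def interactive_reward(task_done, dist_delta, interact_success, interact_fail,
                       r_success=10.0, r_sp=0.01, r_as=1.0, r_af=0.5):
    """Sketch of r_inter = r + r_as - r_af, with r = r_success + Delta_dis - r_sp.

    Only r_sp = 0.01 is given in the thread; r_success, r_as, and r_af
    are placeholder magnitudes for illustration.
    """
    r = (r_success if task_done else 0.0) + dist_delta - r_sp
    return (r + (r_as if interact_success else 0.0)
              - (r_af if interact_fail else 0.0))

# A step that clears the obstacle (interaction succeeds), reduces the goal
# distance by 0.25 m, and completes the task:
print(round(interactive_reward(True, 0.25, True, False), 2))  # 11.24
```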
**References**
[1] Zhang, Junzhe, and Elias Bareinboim. Markov decision processes with unobserved confounders: A causal approach. Technical report, Technical Report R-23, Purdue AI Lab, 2016.
[2] Zeng, Kuo-Hao, et al. "Pushing it out of the way: Interactive visual navigation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
---
Rebuttal 2:
Comment: We really appreciate the reviewer's concerns and criticisms of this work, and hope our response has resolved the confusion and questions in the review. If there are any questions or further comments, please let us know and we will try our best to answer them!
---
Rebuttal Comment 2.1:
Title: Thank the authors for detailed responses.
Comment: I appreciate the authors for their thorough response in addressing my concerns, particularly in clarifying missing details and points of confusion. I do not have further questions regarding those aspects.
I would like to also thank the authors for their detailed responses regarding my questions about "the connection between confounding bias resulting from unmeasurable obstacles and the counterfactual policy design" as well as "uncover the causality depicted in Fig. 2(a)".
However, I still find it challenging to be convinced by the statement suggesting that "the sum of actions from sub-control policies weighted by the master policy's decision can fully represent the agent's hierarchical intent." While I understand that action logits can, to a certain extent, capture the agent's intent, I remain concerned that these logits, being the end result of sub-policies, might disregard crucial information about the environment and the agent's comprehension and belief regarding its state. It's possible that this approach might only capture the agent's intentions at isolated time steps and overlook the memory accumulated along the trajectory the agent has traversed.
For example, in scenarios involving an obstacle in front of the agent, while the Push sub-policy aims to push the obstacle, the Navigate sub-policy might suggest a detour due to its memory of an alternative route. Given that the framework constructs intent solely through a weighted sum of action logits, it may not encapsulate holistic context such as visited states or the environment configuration. Since each sub-policy employs a GRU (as described at L169), it would be insightful to consider a baseline where intent is constructed by summing the weighted GRU memories from the sub-policies, rather than just the action logits. This could offer valuable insights for future HRL model design for embodied agents.
Finally, I want to thank the authors for preparing the model with recursive intent. Looking forward to seeing the results! I'm open to raising my score if strong arguments or new findings suggest that the remaining concerns are less impactful than they seem.
---
Reply to Comment 2.1.1:
Comment: We thank the reviewer for the further concern. Regarding the intuition of the model design, one purpose of the integrated intent is to provide the high-level policy with information about low-level decision-making (L188). We agree that the integrated intent may somewhat overlook the low-level comprehension of the agent's state, although the agent can access the state information through the high-level GRU memories. As the reviewer suggested, we modify CaMP to build a new baseline (denoted CaMP-mem) in which the master policy takes in the sum of memories from the sub-policy GRUs, weighted by the master policy's intent:
$P(i_t)=F(\sum_{\omega^{j} \in \Omega}h_{t}^{\omega^{j}}\cdot P(\omega^{\prime}_{t}=\omega^{j}))$
where $h_{t}^{\omega^{j}}$ denotes the memory of sub-policy $\omega^{j}$, corresponding to equation (7). We embed the memories with a linear model $F(\cdot)$ to match the size of the integrated intent, so that we can train the new model by fine-tuning CaMP from its parameters (given the limited time). We train the CaMP-mem model for 2 million steps on the validation set and report the testing results in the table below (compared with CaMP and its variant without intent):
| Methods | SR(%) (All) | SPL (All) | FDT (All) | SR(%) (Hard, N$\geq$4) | SPL (Hard) | FDT (Hard) |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| CaMP wo/intent | 50.8 | 0.294 | 4.03 | 31.0 | 0.151 | 6.79 |
| CaMP | 56.3 | 0.327 | 3.67 | 41.4 | 0.231 | 5.76 |
| CaMP-mem | 51.2 | 0.298 | 3.96 | 32.9 | 0.168 | 6.78 |
The results show that constructing the intent from action logits achieves better performance than constructing it from low-level memories. We speculate the reason is that the knowledge increment from sub-policy memories over the master policy's own memories is not significant enough. However, CaMP-mem is slightly better than CaMP wo/intent, showing that the memories from sub-policies are still valuable for high-level decision-making.
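For concreteness, the CaMP-mem intent construction above could be sketched as follows; the layer sizes and the random linear map standing in for $F(\cdot)$ are assumptions (only the intent dimension of 12 is stated in this thread), not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, hidden_size, intent_dim = 3, 512, 12   # assumed sizes; intent dim 12 per A2
W = rng.standard_normal((intent_dim, hidden_size)) * 0.01  # stands in for F(.)

def camp_mem_intent(memories, master_probs):
    """memories: (n_sub, hidden_size) GRU states h_t^{omega_j};
    master_probs: (n_sub,) master distribution P(omega'_t = omega_j).
    Returns P(i_t) = F(sum_j h_t^{omega_j} * P(omega'_t = omega_j))."""
    weighted = (master_probs[:, None] * memories).sum(axis=0)
    return W @ weighted

memories = rng.standard_normal((n_sub, hidden_size))
probs = np.array([0.7, 0.2, 0.1])
print(camp_mem_intent(memories, probs).shape)  # (12,)
```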
We again thank the reviewer for the useful comments. Moreover, regarding the comparison in the table above, we think a counterfactual policy should make decisions based on the intent (the unexecuted action) rather than intermediate variables (e.g. hidden features, memories), since the intent is the only "blind spot" of the decision-making process. For instance, it is true that the GRU memories may "encapsulate the holistic context such as visited states or environment configuration". However, even if the agent's memories are not provided through intent, exactly the same memories will be generated by the GRU later and help the agent understand the states (regardless of HRL), and the GRU is trained precisely to utilize memories to better encode the states. In contrast, since the action logits are the end results of the policy, the comprehension behind this "last mile" of decision-making is invisible to the agent, which makes the intent particularly valuable. | Summary: This paper tackles the problem of interactive visual navigation, i.e., an agent navigating in an environment where it is allowed to affect the configuration of the environment (e.g., by moving objects around or picking them up) to improve navigation performance. The key idea is to learn a hierarchical policy that factors in agent intent to propose an action that results in either navigation towards the goal or interaction with an obstacle. A dataset based on the ProcTHOR simulation environment is also introduced to facilitate evaluation.
Strengths: [S1] The proposed method is sound and, for the most part, well-defined. The problem formulation is easy to follow, and the description of the method is clear.
[S2] Interactive navigation is a challenging and relevant robotics/embodied-AI problem for the NeurIPS community. While a large fraction of existing approaches focuses on non-interactive visual navigation, this paper explicitly considers effecting state changes by manipulating obstacles, resulting in a novel problem setting where there isn't much prior work.
[S3] I find the positioning of this paper w.r.t. existing literature fair. The baseline methods considered for evaluation are representative of the various flavors of non-interactive visual navigation approaches that have, over the years, been proposed for PointGoal navigation.
[S4] The paper discusses enough implementation details that a reasonable practitioner may be able to replicate the key aspects of the model architecture and the state, action, reward structures.
Weaknesses: Meta-comment: I have one major concern with the experiment design and evaluation setup, which unfortunately results in the key claims of the paper not being substantiated. I have tried to elaborate on the issue and the rationale behind it, and I also suggest a few mitigation strategies (note: these aren't the only possible ones; other strategies are welcome too). While I am unsure whether these can be addressed in the short author-response window, if they are adequately addressed, I would have no reservations in bumping my score up.
[W1] **[Major] Evaluation metrics and experiment design**:
Interactive navigation is very tricky to evaluate. In (non-interactive) navigation scenarios, the path-length metrics used in the literature are often highly correlated with execution time. Assuming that each atomic action executed by the agent takes nearly the same time, a longer path length means that the agent takes longer to reach the goal. However, in the interactive navigation scenario, typical path-length metrics like SPL are no longer good indicators of the amount of time it would take to complete the task. This is because the time spent picking up or moving objects also counts towards the overall "time-to-reach-goal-state"; and picking up or moving objects is heavily dependent on object states, which voids the assumption that each action takes roughly the same amount of time.
I understand the rationale behind using typical metrics like SR (success rate), SPL (success-weighted inverse path length), and FDT (final distance to target) -- these metrics allow for easy benchmarking with existing approaches (esp. non-interactive navigation techniques); these are also easily accessible via modern simulators that support PointGoal navigation.
That said, reporting only these above metrics only portrays the benefits of an interactive navigation strategy, while masking away the disadvantages. Execution time is often lost when interacting with objects, which goes unaccounted for. This becomes apparent when you consider the following counterexample: assume that an agent spends more than half its time moving (or picking up) every object encountered in its way; but ends up taking the shortest possible (i.e., optimal) path to goal. It will then end up with a success rate and SPL of 1 and an FDT value of 0. However, in reality, an agent that takes a twice as long path to the goal (but does not interact with any object) will end up reaching the goal at about the same time, and will have the same SPL, SR, and FDT values (recall that, in this hypothetical scenario, the interactive agent spends half its time interacting with obstacles in its path). This calls for a different approach to evaluate interactive navigation approaches; ideally the SPL metric should also account for time lost due to interaction. Measuring this time could be challenging (or impossible) to do in the first place, so here are a few potential mitigation strategies to consider.
* While episode lengths are fixed to ensure the agent does not take an indefinite amount of time, for a fair comparison against non-interactive agents each episode length should be capped at the number of timesteps in which a non-interactive agent completes the task. This ensures a fair comparison is possible whenever a reasonably accurate time estimate is available for each interaction.
* An alternative would be to consider evaluating solely in scenarios where it is impossible to reach the goal without interacting with objects. (more nuance on this follows in [W2] below).
[W2] **Eval on data subsets**: I would have liked to have seen more granularity in the quantitative results presented in Table 3. It would, for instance, be useful to split the dataset into categories depending on whether or not interaction is required to solve an episode, and on the episode's difficulty level. This becomes more important because the dataset is claimed as a contribution. Table 2 goes a bit in this direction, indicating the number of obstacles per room; however, it is not clear how many episodes absolutely require interactive navigation (i.e., cannot be solved by a non-interactive optimal agent). (The dataset construction seems to ensure a 50% likelihood that a path is unavailable -- I would argue in favor of forming two splits of the dataset: one where no non-interactive agent trajectory exists, and one where an optimal non-interactive agent will need to take a longer path.)
[W3] **[Minor]** The paper, in its current form, falls short in technical rigor when discussing aspects of causal inference. E.g., lines 134-135 "However, without considering the causalities from obstacles, it's hard to generalize to unseen environments or large-scale datasets."; line 38 "... , learning causality through RL training is challenging due to the existence of unobserved..."; line 137 "The above confounding bias can be tackled...". Neither the "causalities stemming from obstacles" nor the "confounding bias" has been clearly defined in the paper. It is also important to clearly define the type of causal relationships that are being learned by design (perhaps Section 4.2 is a good place to do so).
Revising these aspects of the paper will make the descriptions of the approach more accurate.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, I like where this paper is going. The hierarchical policy presented herein is sound. My concerns are not with the method; rather with the choice of evaluation metrics. In my review (see weaknesses above), I have attempted to also provide a few mitigation strategies (these are by no means extensive -- merely initial thoughts on how newer evaluation metrics may be designed for the task at hand).
I would like to see this discussed further in the author response phase. If addressed adequately, I have no hesitation in bumping my score up.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive concerns and address them in detail in the following lines.
Q1. Evaluation metrics and experiment design.
The reviewer is concerned about the evaluation of InterNav, noting that "in the interactive navigation scenario, typical path length metrics like SPL are no longer good indicators of the amount of time it would take to complete the task" and that "reporting only these above metrics only portrays the benefits of an interactive navigation strategy, while masking away the disadvantages", and provides a few potential mitigation strategies for evaluation.
A1. We agree that “time of completing the task” is also an important evaluation indicator for InterNav. In our scenario, by calculating the standard time cost of each action (including movement and manipulation), a time cost metric can also be evaluated in our experiments, which can be regarded as a time-measurement variant of the SPL metric. Also, considering that most previous InterNav works [1,2] report the SPL, in order to compare with those works directly, we’ll report both time and path length metrics in the following response and final version.
First, we evaluate an additional metric STS (short for Success rate weighted by Time Steps) to measure the time cost of task completion: $STS=\frac{1}{N}\sum_{n=1}^{N}Suc_{n}\frac{L_{n}/grid}{TS_{n}}$, where $L_{n}$ is the shortest path length, $TS_{n}$ is the number of timesteps the agent takes to complete the task, and $grid=0.25m$ is the unit distance the agent moves forward in one step. Thus $L_{n}/grid$ represents the number of timesteps it would take to navigate to the goal by merely moving forward (without spawned obstacles). Since the agent takes atomic actions in the AI2-THOR simulator and each action takes the same amount of time to execute (one timestep), we measure the time cost by the number of timesteps. STS is higher when the agent accomplishes the task in less time; in the ideal case, the goal is directly ahead and no interaction is needed, giving STS = 1 (since this ideal case exists for all navigation scenarios, STS remains computable for InterNav). As a matter of fact, we train our model following this idea, since our reward shaping $r=r_{success}+\Delta_{dis}-r_{tp}$ encourages interaction that efficiently reduces the goal distance with fewer timesteps, rather than pursuing a shorter trajectory. Thus, both strategies of efficient bypass and effective interaction are rewarded. We report the performance of several models in the table below; the results show that STS is a more stringent measure and is able to reflect the performance differences between models.
|Methods|STS (all)|STS (N$\geq$4)|
|:----:|:----:|:----:|
|PPO|0.134|0.086|
|NIE|0.155|0.102|
|HER|0.135|0.084|
|PPO+intent|0.163|0.109|
|CaMP|**0.177**|**0.121**|
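The STS metric described above can be sketched as a short function; the episode tuple layout and names below are illustrative assumptions, not the authors' implementation.

```python
def sts(episodes, grid=0.25):
    """Success rate weighted by Time Steps (STS), per the rebuttal's formula.

    Each episode is a tuple (success, shortest_path_len_m, timesteps):
      - success: whether the agent completed the task (Suc_n)
      - shortest_path_len_m: shortest path length in meters (L_n)
      - timesteps: timesteps the agent actually took (TS_n)
    L_n / grid is the number of forward steps needed on the obstacle-free
    shortest path, since grid (0.25 m) is the per-step forward distance.
    """
    total = 0.0
    for success, shortest_len, timesteps in episodes:
        if success:
            total += (shortest_len / grid) / timesteps
    return total / len(episodes)

# A successful 5 m episode finished in 40 steps contributes (5/0.25)/40 = 0.5;
# a failed episode contributes 0, so the average over both is 0.25.
print(sts([(True, 5.0, 40), (False, 3.0, 100)]))  # → 0.25
```

An agent that walks the obstacle-free shortest path directly (timesteps = L_n/grid) scores the maximum STS of 1 for that episode.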
Second, in InterNav scenarios, SPL is an indicator of the agent's ability to interact with objects, which is especially crucial in cluttered environments. Fei Xia et al. [2] introduced SPL to interactive navigation as a measurement of path efficiency: “Path Efficiency: how efficient the path taken by the agent is to achieve its goal. The most efficient path is the shortest path assuming no interactable obstacles are in the way.” Although the most efficient path may not correspond to the least time cost, it indicates the most effective interaction. For InterNav in complex multi-room environments, we aim to improve the agent's interactive ability so it can proactively change the environment for better navigation, rather than limiting its strategic choices to its capacity.
Q2. Eval on data subsets. The reviewer suggests that "I would argue in favor of forming two splits of the dataset, one where no non-interactive agent trajectory exists; and one where an optimal non-interactive agent will need to take a longer path".
A2. We appreciate the suggestion and form a split of the dataset in which non-interactive trajectories (longer than the shortest path) exist, to enrich the evaluation. We first calculate the ratio of that split in the whole dataset: 20.5% (overall), 27.4% (1 $\sim$ 2 rooms), 18.6% (3 $\sim$ 5 rooms), 12.4% (6 $\sim$ 10 rooms). Then we report the performance of models (a non-interactive PPO trained on ProcTHOR is included) on the **new split** in the table below. Interestingly, PPO without interaction achieves a better STS (0.201) than PPO (0.181), although it obtains a lower SR on the non-interactive set (47.4%) and the whole set (21.5%). This indicates that the strategy of object interaction may cost unnecessary time in uncrowded environments, and the agent needs to balance efficiency and efficacy during the task.
|Methods|SR (%)|SPL|STS|FDT|
|:----:|:----:|:----:|:----:|:----:|
|PPO (non-inter)|47.4|0.309|0.201|4.62|
|PPO|51.5|0.306|0.181|3.44|
|NIE|58.8|0.345|0.188|3.01|
|HER|51.9|0.316|0.176|3.40|
|PPO+intent|70.4|0.390|0.222|2.38|
|CaMP|**72.3**|**0.407**|**0.236**|**2.05**|
Q3. Concept clarification.
A3. Thanks for the concern. By "causalities from obstacles" we refer to the causal relationships from the obstacle (O) to other causal factors (i.e., O $\rightarrow$ A, O $\rightarrow$ R in Figure 2). By "confounding bias" we refer to the negative phenomenon of $P(R|do(A))\neq P(R|A)$ caused by the confounder (i.e., obstacles in InterNav). Our method is designed to better learn the causality from action to reward (A $\rightarrow$ R). We will revise the concept clarifications in our paper for better understanding.
**References**
[1] Zeng, Kuo-Hao, et al. "Pushing it out of the way: Interactive visual navigation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[2] Xia, Fei, et al. "Interactive gibson benchmark: A benchmark for interactive navigation in cluttered environments." IEEE Robotics and Automation Letters 5.2 (2020): 713-720.
---
Rebuttal 2:
Comment: Thanks to the reviewer for the concerns and suggestions for this work. We hope our response has resolved the confusion and questions in the review. If there are any questions or further comments, please let us know and we will try our best to answer them!
---
Rebuttal Comment 2.1:
Title: Author response effectively addresses all major concerns
Comment: Hi authors,
Many thanks for all your (substantial) efforts in preparing the author response. This addresses all of the major concerns I had on the initial manuscript.
I have read through the other reviews and responses therein. My concerns were not so much about the soundness of the work (as I stated in my initial review, the method was sound); it was only with the design of the experiments and evaluation metrics. The STS metrics and the analyses on the new split address both these concerns. As such, I have no hesitation in further improving my score.
The somewhat contrary trends across STS and SR on the new split indicate that there is no one metric yet that may be used as a standalone indicator of performance, and that we may need both STS and SR to be high, when determining which InterNav approach is better. This is worth clarifying in a revised manuscript.
Again, I really appreciate this response; and like where this work is heading.
---
Reply to Comment 2.1.1:
Comment: Thanks for the appreciation and useful suggestions. We'll revise the final version according to the suggestions and our responses. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
TOA: Task-oriented Active VQA | Accept (poster) | Summary: This paper proposes task-oriented active VQA (TOA), which uses LLM as an implicit knowledge source and answers the question through a sequential hypothesis-verification process. This method can more accurately attend to the essential information in the images and reduce the introduction of irrelevant information. And they develop a multi-round dialogue approach to solve the problem progressively and decide the next step dynamically, which has a clear answering process and better tolerance of mistakes in previous steps. The experiments show the method outperforms the baselines and presents clear reasoning procedures.
Strengths: 1. The idea of this paper is very novel, and the key visual content is obtained through multiple rounds of dialogue.
2. The method takes full advantage of the rich knowledge and application flexibility of LLM.
3. The process design of reasoning-hypothesis-verification can reflect clear reasoning procedures and has better interpretability.
4. Experiments show that the performance of the method is very good, and verify the validity of the design idea.
Weaknesses: 1. Most of the sub-modules are somewhat similar to existing models, and the design of the reasoning-verification process may be ad hoc and have great limitations.
2. In the experiment, other methods use GPT-3 and this method uses ChatGPT, which is a bit unfair.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Will this method be limited by the high cost of ChatGPT? What is the impact on the generation effect if other open-source LLMs are used?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors say that subjective example design brings uncertainty to the reasoning process, and evaluation methods for open-ended question answering are incomplete. I think it may be necessary to introduce some human evaluation as support in both aspects.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our manuscript, for providing such valuable feedback and your appreciation of our novelty regarding the knowledge-based VQA and LLM, the better interpretability of our approach, and our experimental results! Here are our response for each of your questions.
**[Reviewer 8uLs Weakness-1: Choices of vision models and paradigm design]**: We appreciate the reviewer's concern. First, we want to emphasize that: our core contribution lies in creating a novel computational paradigm where the LLM can iteratively and sequentially combine question and image information, imitating human cognitive processes and using its knowledge for reasoning. Although we utilize existing sub-modules, it is our unique integration and the underlying philosophy that sets our approach apart. This new approach allows the LLM to actively form and verify hypotheses by collecting visual information, representing a significant innovation in the field.
- **Regarding using existing vision models in our vision executor**: We utilized common vision models (e.g., vision models used in Visual Programming) to ensure a **fair and meaningful comparison** with existing methods. This is because only by doing so can we readily show that the improvement brought by our method is not because we are using different or more advanced vision models, but from the novelty of our proposed task-oriented active VQA approach for Knowledge-based VQA. This can be seen from the comparison between our proposed method and the Visual Programming in our main paper Table 3. Moreover, our flexible prompting design can allow the vision modules to be easily added or changed for different task requirements in our proposed method.
- **Regarding "the design of the reasoning-verification process"**: Our hypothesis-verification reasoning process is inspired by the human cognitive process, where assumptions and inferences are made based on their knowledge and then verified actively. Translating this into a computational form is non-trivial, and our work is the first to tackle this challenging problem. We acknowledge that our implementation can be enhanced, but we believe our current work has effectively demonstrated the potential and feasibility of this new paradigm.
**[Reviewer 8uLs Weakness-2: Using GPT-3 for our approach]**: Thank you for highlighting this concern. We observe that both GPT-3 and ChatGPT possess their unique strengths, and it's not a matter of one definitively being better than the other. GPT-3 tends to have stronger imitation abilities through in-context learning, whereas ChatGPT may have better reasoning capabilities and creativity, especially in multi-round dialogues. However, this might also make maintaining format consistency across multiple dialog rounds more challenging with ChatGPT.
Our method involves using a multi-round dialogue format, which initially made ChatGPT a more convenient choice for us. To address your concern, we conducted additional experiments by switching from ChatGPT to GPT-3, the same model used in our compared baselines. Interestingly, the results showed that our method performed even better with GPT-3 than with ChatGPT (improved from **60.6%** to **61.5%**). This more even-handed comparison further accentuates the advantages of our proposed method over compared methods. Our conjecture for this interesting outcome is that GPT-3 already has sufficient reasoning ability and knowledge for current knowledge-based VQA tasks, and it may better imitate the in-context examples and follow the instructions.
**[Reviewer 8uLs Question-1: Usage of other LLMs]**: Thank you for this question. We would like to point out that all the methods we have compared, based on LLMs, are dependent on GPT. Thus, in general, all related methods would be somewhat constrained by the cost of reasoning with GPT. Compared to existing methods such as Visual Programming, our method has significantly reduced the average reasoning cost for a single question from approximately 0.049 to 0.0085 dollars.
Regarding open-source LLMs, the current open-source models still have some gaps in reasoning ability compared to the GPT series, which may somewhat impact the final answer's effectiveness. We conducted experiments with LLaMA and the free-to-use Claude-2 model by Anthropic. The accuracy on OKVQA using LLaMA is **51.8%**, which is still at an acceptable level. We believe that as open-source LLMs continue to advance, this issue will gradually be resolved. Our experiments on Claude-2 yielded an accuracy of **60.1%**, which is comparable to GPT, further demonstrating the applicability of our framework to different LLMs.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: My major concerns have been addressed. I will keep my score. | Summary: This paper addresses knowledge-based visual question answering by leveraging a large language model (LLM) as a knowledge source. To mitigate the limited capability of vision models, the paper suggests letting the LLM predict hypothesis and actively gather visual evidence. Experimental results show that the proposed method outperforms baseline approaches in open-ended knowledge-based VQA.
Strengths: - The paper is well-written, clearly structured, and easy to follow.
- The motivation behind using an LLM for VQA to assist imperfect vision models is reasonable, and qualitative results demonstrate that the method behaves as intended.
- Ablation studies reveal that the multi-round dialogue strategy and hypothesis-verification process positively contribute to the approach.
- The experiments are thorough, offering insights into the impact of design choices.
Weaknesses: - The method's reliance on heuristic prompting may limit its generalizability to other LLMs.
- The validity of post-processing, which paraphrases predictions using a predefined vocabulary, is unclear. A small analysis of the post-processing would make the paper more convincing. For instance, providing examples of raw predictions and post-processed ones would help readers better understand the post-processing.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - About the design choice of heuristic prompting, investigating the effects of different prompts would be interesting.
- It would be helpful to see in-context examples for specific image-question pairs.
- A table reference error (Table 4.1) should be corrected.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper discusses two limitations.
- Evaluating open-ended VQA remains an unresolved challenge.
- The design of prompting examples may not be optimal.
They are important issues in this area.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our writing, motivation, and experimental results! We have considered all your suggestions and have made the necessary revisions or clarifications to improve the quality of our manuscript as follows. We have attached a one-page pdf in our global response to demonstrate our prompting instruction and in-context learning examples.
**[Reviewer TKN8 Weakness-1: Prompting strategy]**: We would like to apologize if naming Section 3 as 'heuristic prompting' in our paper gave the reviewer the impression that the design of the instruction prompting template was heuristic or without basis. Actually, it is designed to guide LLM reasoning following a thoughtful and deliberate process that includes several key elements:
1. **Defining the Task**: We begin by defining the task to inform the LLM about the specific reasoning task at hand.
2. **Introducing Available Vision Tools**: We then present the available vision tools within the prompt, enabling the LLM to recognize the executable functions to verify the hypotheses.
3. **Specifying Output Format Constraints**: By delineating the expected output format, we ensure consistent parsing of the LLM's output to automatically query our designed vision functions.
You can also check the one-page pdf in global response for more concrete examples of our prompting strategy.
**Ablation study on the design choice of our prompting strategy.** To demonstrate the effectiveness of our design principles, we have added a new ablation study. We removed the task description, vision tools description and hypothesis guidance in the prompt instruction and in-context examples, respectively. The results are shown in the table below:
| Model | Accuracy (%) |
|------------------------------|--------------|
| TOA-full | 60.6 |
| w/o task description | 57.2 |
| w/o vision tools description | 58.6 |
| w/o hypothesis guidance | 55.1 |
The results show that these concise yet essential design principles have proven to be effective to guide the LLM for knowledge-based visual reasoning. However, we also recognize that our design might not be optimal. Further refinement in prompt design might yield improvements, which will be explored in our future work.
**Applicability to other LLMs.** Last but not least, we replaced GPT with the free-to-use Claude-2 model by Anthropic and conducted the same experiment on OKVQA, achieving an accuracy of **60.1%**, comparable to GPT. This further demonstrates the applicability of our framework to different LLMs.
**[Reviewer TKN8 Weakness-2: Analysis of the evaluation process]**: Thank you very much for the suggestion. Here we provide some examples of raw predictions and mapping to the answer vocabulary:
(1) Prediction: to advertise; Ground Truth: advertising
(2) Prediction: to tell directions; Ground Truth: navigation
(3) Prediction: one-way traffic; Ground Truth: one way
(4) Prediction: bike; Ground Truth: bicycle
From the above examples, we can see that these matchings are reasonable and semantically highly relevant.
Moreover, to analyze our evaluation strategy more comprehensively, we have further conducted two different types of evaluation:
- **Human Evaluation**: We conducted a human evaluation on the predictions of OKVQA dataset by our proposed method, and the accuracy is **65.0%**, which is even higher than the result of 60.6% obtained by our original evaluation strategy in Table 1 of our main paper.
- **Leverage GPT for evaluation**: After we obtain the predicted results, we separately call the OpenAI API to initialize a new GPT to judge whether the predicted answer and the ground truth can be considered consistent, given the original question. The resulting accuracy is **64.7%** for the prediction from our proposed method, which is similar to the human evaluation.
These results further demonstrate that although our initial attempt on open-ended answer evaluation may not be completely perfect, it **did not over-claim** the performance of our proposed method. Thus we can affirm the effectiveness of our method. Moreover, this analysis further validates the necessity of addressing the evaluation of open-ended question answering.
**[Reviewer TKN8 Question-1]**: Please refer to our reply to **[Reviewer TKN8 Weakness-1]** for the **Ablation study on the design choice of our prompting strategy**.
**[Reviewer TKN8 Question-2: Demonstration of in-context examples]**: Please refer to the one-page pdf we attached in our global response for the demonstration of the in-context examples.
**[Reviewer TKN8 Question-3: Typo]**: Thank you for pointing this typo out; we have revised it accordingly!
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thank you for your responses.
- The explanation has helped clarify the intuition behind designing the prompt, and that the ablations effectively validate the design.
- I appreciate the additional report on the results with Claude-2. The report suggests the generalizability of the proposed method.
- The examples of post-processing outputs were able to address my question. I believe that the technique is reasonable.
The responses clarify the contributions of this work, thus I remain positive on this paper. | Summary: Early methods for Knowledge-based visual question answering (VQA) explicitly retrieve knowledge from external knowledge bases, often introducing noisy information. Current large language models like GPT-3 as implicit knowledge sources cannot effectively understand image inputs. Thus, extracting the image information and inputting it into large language models remains an open problem. Using image captioning and object descriptions to represent the image may either drop the essential visual information to answer the question correctly or involve irrelevant objects to the task of interest. To address this problem, the authors propose to let large language models make an initial hypothesis according to their knowledge, then actively collect the visual evidence required to verify the hypothesis. In this way, the model can attend to the essential visual information in a task-oriented manner. The authors leverage several vision modules from the perspectives of spatial attention (i.e., Where to look) and attribute attention (i.e.,16 What to look), which is like human cognition.
Strengths: - The idea to exploit ChatGPT to solve knowledge-based VQA interactively seems novel.
- Compared to the recent similar method, Visual Programming, the proposed method shows favorable performance, and the authors provided an ablation study and hyper-parameter analysis to verify the characteristics of each component of the proposed method.
Weaknesses: - It would be more helpful to add an experiment to verify whether the proposed method can be generally applied to other tasks. For example, can the proposed ChatGPT-based interactive framework also help improve the performance of other datasets, such as the VQAv2 dataset?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to the questions in the weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I cannot find a potential negative societal impact in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for your insightful comments, especially your appreciation of the novelty of our proposed approach and our experimental results. For the concern you expressed in the Weakness section, we have addressed it in our **Global Response-2**. We paste the response here for your convenience.
**[Global Response-2: About experiments on more datasets]** We greatly value your suggestions regarding the expansion of our experiments to more datasets and different vision-language tasks. We acknowledge the importance of demonstrating generalizability, but here's why we have opted for a specific focus on Knowledge-based VQA:
- **Our specific focus: Knowledge-based VQA**: First, as explicitly demonstrated in our abstract and introduction, our method targets exploiting LLM for the Knowledge-based VQA task. This is because Knowledge-based VQA requires open-world knowledge to answer the question about an image and is more challenging than traditional VQA tasks [3] since the model needs to extract relevant external knowledge and then perform joint reasoning on the question, image, and the knowledge. Such a challenging requirement essentially demands the LLM to be actively engaging in collecting the essential information from both the question and image, and motivates us to propose our task-oriented active VQA method for Knowledge-based VQA in this paper. Extending our method to more general vision-language tasks is not our current priority in this paper, because we believe that it is more beneficial and important for us to comprehensively demonstrate the effectiveness of our proposed method on the challenging Knowledge-based VQA.
- **Existing datasets for Knowledge-based VQA**: For the task of knowledge-based VQA, OKVQA and A-OKVQA datasets are the standard benchmarks in the existing literature [9, 51, 52, 4, 5, 6, 12, 11, 13]. The comparison of these two datasets with state-of-the-art methods in Table 1 and 2 in our main paper sufficiently demonstrates the efficacy of our proposed method for the knowledge-based VQA.
- **More results on other VQA tasks**: As a complement, we conducted additional experiments on the VQAv2 dataset to demonstrate the broader applicability of our method. We achieved **74.8%** (**16 shots**) on a randomly selected subset of the VQAv2 val set, which is a competitive few-shot result compared with the powerful vision-language model Flamingo [a] (**68.4%, 32 shots**). We consider these exploratory results a stepping stone for future research.
[a] Flamingo: a visual language model for few-shot learning. NeurIPS 2022.
---
Rebuttal 2:
Comment: The authors' response answered my question.
Therefore, I will keep my score for this paper. | Summary: This paper tries to solve the knowledge-based visual question answering (VQA) by proposing a new approach that utilizes LLMs for calling visual modules in a task-oriented manner. The method employs a reasoning-hypothesis-verification process in multiple rounds to progressively find the answer. Evaluations are conducted on OK-VQA and A-OKVQA to demonstrate the effectiveness of this method.
Strengths: 1. Multi-round interactions and reasoning-hypothesis prompting are introduced in this paper. Compared with previous one-time program generation, those two improvements are reasonable intuitively and can benefit on reported benchmarks.
Weaknesses: 1. In the title, abstract, and introduction, `task-oriented` seems to be emphasized as the main advantage of this paper over others. However, ViperGPT and Visual Programming are also task-oriented. The main differences between this paper and the previous two are the rationale and the multi-round design. The corresponding parts should be rewritten.
2. I don't understand the necessity of the hypothesis. From the given examples, it seems that without an assumed hypothesis, the LLM should still be able to call the right verification function. In Fig. 3's examples, if you remove the hypothesis, the LLM's workflow still seems smooth and reasonable, and many hypotheses are actually None. In Tab. 3's ablation, both reasoning and hypothesis are removed, which is not indicative enough. Can you do an ablation that removes the hypothesis while preserving reasoning, verification, and answer?
3. Why don't you compare with ViperGPT on OK-VQA? Is that because ViperGPT works in a zero-shot manner, while yours works in a few-shot manner? Then why must your method be reported in a few-shot manner, and what performance can it achieve in the same setting as ViperGPT?
4. Only two benchmarks are compared. In ViperGPT/Visual Programming, 4/3 datasets are benchmarked. To demonstrate the generalization ability of the proposed method, at least one or two more typical datasets are needed.
5. Similar ideas of multi-rounds and reasoning in each step have been used in [1]. Please compare the difference.
6. Many visual models are employed in this paper however the technical details as well as related prompts are not included in the paper. It makes reproduction difficult.
Ref:
[1] See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors didn't discuss limitations in the paper. However, the LLM used in the paper might generate toxic or biased responses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable and constructive comments. We have made the necessary revisions or clarifications to improve the quality of our manuscript as follows. We have attached a one-page pdf in our global response to demonstrate our prompting instruction and in-context learning examples. Here are our responses to each of your questions.
**[Reviewer BgCz Weakness-1: The term of `task-oriented`]**: Thank you for your suggestions! We want to clarify that the term `task-oriented` refers to a dynamic, iterative process where the Large Language Model (LLM) actively engages with both the question and image. The LLM formulates hypotheses and conducts active vision verifications, adaptively reasoning based on the results until the LLM itself determines the final answer. This whole process is intimately connected to the task at hand, continually interacting with and adapting to the information gathered, which justifies our use of the term `task-oriented`.
In contrast, existing methods such as Visual Programming and ViperGPT use the LLM merely as a tool for task planning or code generation. They generate an immutable program based solely on the question **without deeply or actively** engaging with the image, and they obtain the answer directly from the execution of vision models. Thus they are **not** `task-oriented` under our definition. In these methods, the LLM can not make adjustments for errors in either program planning or vision models, and their application on questions requiring open-world knowledge is limited.
We appreciate your concern, and we will revise the relevant sections to make this distinction clearer and underline the unique aspects of our method.
**[Reviewer BgCz Weakness-2: Necessity of Hypothesis]**: Please refer to the **Global Response-3**.
**[Reviewer BgCz Weakness-3: Zero-shot setting]**: Our approach and all other compared methods employ few-shot instructions. This is because the in-context examples enable the LLM to follow our expected sequential hypothesis-verification reasoning process and give the output in the expected format. The output format facilitates distilling the essential information from the free-form text generated by the LLM into executable vision functions. In our proposed method, the LLM acts as a reasoning agent, like a human brain, which conducts reasoning and task planning by actively utilizing its common-sense knowledge and finally outputs the answer itself. ViperGPT, in contrast, uses Codex to directly generate Python code based on the question and derives answers from the execution of the generated code. Since Codex is pretrained specifically for code generation, ViperGPT does not require additional instructions to guarantee the output format and can work in a zero-shot setting.
Given these fundamental differences between the two types of methods, we compare our method with the methods that also employ the LLM as a reasoning agent in a few-shot manner in our experiment section. Such comparison is fair, and our experiments have sufficiently demonstrated the effectiveness of our method.
**[Reviewer BgCz Weakness-4: More datasets]**: Please refer to the **Global Response-2**.
**[Reviewer BgCz Weakness-5: Comparison to IPVR[12]]**: Thank you for bringing up the IPVR work, which we have already cited and discussed as IPVR [12] in our main paper, with explicit comparisons in both **Tables 1 and 2**. In IPVR, the process consists of three key modules: a 'see' module for object-level detection, a 'think' module for selecting attended objects and transforming them into captions, and a 'confirm' module for rationale verification. While superficial similarities exist, we would like to highlight the fundamental differences between IPVR and our method:
1. **Object-level detection vs. Active image understanding**: IPVR focuses on object-level detection and attending, which may either bring in objects irrelevant to the question or drop essential information in the caption. In contrast, our method is not constrained by object-level image understanding and captioning, as we utilize various vision functions to actively acquire visual information.
2. **Confirmation Process**: While IPVR also involves a confirmation process, it merely uses the LLM to generate a **textual** rationale to support the answer, confirming the answer when the generated rationale matches the predicted answer. In contrast, the verification part of our method allows the LLM to acquire **new visual evidence** via the vision executor based on its hypothesis, which facilitates subsequent rounds of reasoning. The LLM confirms the answer when it has obtained sufficient information to reason about the answer using its open-world knowledge.
By elucidating these differences, we establish that our method, although superficially similar to IPVR, is **essentially different** in its approach and underlying principles. These differences contribute to the uniqueness and innovation of our proposed method. Moreover, these underlying disparities are manifest in our performance results. Specifically, our method's scores of **60.6** and **61.2** on the OK-VQA and A-OKVQA benchmarks, respectively, are significantly superior to IPVR's scores of **44.6** and **46.4** on the same benchmarks, as shown in Tables 1 and 2 in our main paper.
**[Reviewer BgCz Weakness-6: Technical details]**: In designing and implementing our vision executor, we adopted commonly used vision models with public code, in line with the methods compared in Tables 1 and 2. This makes experimental comparisons fairer, and re-implementation will not be difficult since all the models we used offer user-friendly APIs. This design choice allows us to highlight the improvements and advantages of our new method over existing methods by eliminating differences in vision models. We will publicly release our full implementation after acceptance and offer help with reproduction.
---
Rebuttal Comment 1.1:
Title: Towards Response.
Comment: Thank the authors for the explanation and supplementary experiments. It's suggested to highlight the explanation about how this model is more `task-oriented` compared with others at the beginning of the paper. Other concerns of mine have been resolved. I'd like to raise the rating. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for your insightful and constructive reviews! In this global response, we want to address the common questions inquired by different reviewers, and also demonstrate the necessity of making hypotheses in our proposed method.
**[Global Response-1: Demonstration of our prompt instructions and in-context examples]** In the one-page pdf allowed by the rebuttal, we provide figures to illustrate the prompt instructions and in-context examples we used in our method. We hope this can help reviewers better comprehend the simple yet effective design principles of our prompts.
**[Global Response-2: About experiments on more datasets]** We greatly value the suggestions from Reviewers **BgCz** and **ckp5** regarding the expansion of our experiments to more datasets and different vision-language tasks. We acknowledge the importance of demonstrating generalizability, but here's why we have opted for a specific focus on Knowledge-based VQA:
- **Our specific focus: Knowledge-based VQA**: First, as explicitly stated in our abstract and introduction, our method targets exploiting the LLM for the Knowledge-based VQA task. Knowledge-based VQA requires open-world knowledge to answer a question about an image and is more challenging than traditional VQA tasks [3], since the model needs to extract relevant external knowledge and then perform joint reasoning on the question, image, and knowledge. Such a challenging requirement essentially demands that the LLM actively engage in collecting essential information from both the question and the image, which motivates us to propose our task-oriented active VQA method for Knowledge-based VQA in this paper. Extending our method to more general vision-language tasks is not our current priority, because we believe it is more beneficial and important to comprehensively demonstrate the effectiveness of our proposed method on the challenging Knowledge-based VQA task.
- **Existing datasets for Knowledge-based VQA**: For the task of knowledge-based VQA, OKVQA and A-OKVQA datasets are the standard benchmarks in the existing literature [9, 51, 52, 4, 5, 6, 12, 11, 13]. The comparison of these two datasets with state-of-the-art methods in Table 1 and 2 in our main paper sufficiently demonstrates the efficacy of our proposed method for the knowledge-based VQA.
- **More results on other VQA task**: As a complement, we conducted additional experiments on the VQAv2 dataset to demonstrate the broader applicability of our method. We achieved **74.8%** (**16 shots**) on a randomly selected subset of the VQAv2 val set, which is a competitive few-shot result compared with the powerful vision-language model Flamingo [a] (**68.4%, 32 shots**). We consider these exploratory results a stepping stone for future research.
**[Global Response-3: the necessity of the hypothesis-making]** We thank reviewer **BgCz** for inquiring about this question. We believe it is better to treat it as a global response to provide a comprehensive answer. Here is our response:
- **Necessity of Hypothesis**: In our proposed method, the hypothesis is derived from the knowledge of LLM and the current information that it processes. With a clear hypothesis-verification process, LLM will be prompted to actively incorporate and exploit its open-world knowledge for its reasoning, instead of completely relying on the output of the vision models. For instance, when the output of the vision models contradicts common-sense knowledge from the LLM, the LLM instructed by our hypothesis-verification mechanism will tend to collect more information from the image, e.g., make another hypothesis or change to a different verification tool, such that the LLM itself can integrate more comprehensive information to give the answer. Such merit has also been demonstrated in our main paper’s **lines 299-303**, with an example in the bottom-left subfigure of **Figure 3**.
- **The reason why the workflow is smooth in Figure 3**: When we design our instruction and in-context examples, we explicitly instruct the LLM to make hypotheses and verify them in the 'Reasoning' section in our prompting. **Thus the reasoning process of LLM has intrinsically included the process of hypothesis-making, and they are interconnected.** We distinguish ‘Hypothesis’ to keep the output format consistent and clear.
- **Regarding 'many hypotheses are actually None’**: This is due to the multi-round dialogue nature of our demonstration in Figure 3. It appears that many hypotheses in Figure 3 are ‘None’ in single steps, but that does not mean the model did not formulate them in the whole dialogue. In every single round, the ‘Hypothesis’ may be ‘None’ in three cases:
1. LLM lacks sufficient information to make any hypothesis and needs to gather more visual information, often at the beginning of the dialogue.
2. The previous hypothesis is overturned, and a new hypothesis is not made.
3. The answer can be decided by the collected information, and no more hypotheses are required. This is the final round of the dialogue, and the ending is determined adaptively by the LLM agent.
- **Ablation of removing the hypothesis**: Finally, we conducted two supplementary experiments. We first removed only the ‘Hypothesis’ from the output format. It does not obviously influence the overall accuracy, since the LLM is still following the hypothesis-verification stream. Then we rewrote the ‘Reasoning’ in our prompts to **remove the guidance for LLM to make hypotheses** during the ‘Reasoning’, and the performance on the OKVQA dataset dropped significantly from **60.6%** to **55.12%**, which emphasizes the **indispensability of the hypothesis** in our proposed method.
[a] Flamingo: a visual language model for few-shot learning. NeurIPS 2022.
Pdf: /pdf/ee83250a16f3b7c621158765b7f327c31624ce07.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Covariance-adaptive best arm identification | Accept (poster) | Summary: This work focuses on best arm identification, that is, determining the arm with highest average return within a small number of samples (sample complexity). The twist is that authors propose a framework where sets of arms (of arbitrary size) can be queried at the same time, and the set of corresponding rewards is observed. This setting goes beyond classical frameworks, as observations in a set of simultaneously queried arms are dependent. Moreover, the arm pairwise covariances are unknown to the bandit algorithm. Authors propose an algorithm tackling this problem, which nearly matches associated lower bounds, and empirically evaluate it.
Strengths: - Originality: Authors introduce a novel best arm identification setting which might have interesting real-life applications. Differences with prior works are thoroughly discussed and the discussion covers a large set of related settings.
- Quality: The results in this paper seem technically sound, although I did not check the proofs in detail. The experimental study is convincing and supports theoretical claims. The code is available, which helps reproducibility and potential future follow-up.
- Significance: The setting tackled in this paper introduces new technical challenges (in particular to derive lower bounds), and the technical tools used in proofs can be of independent interest.
Weaknesses: - Clarity: I appreciated the many examples provided in order to illustrate intuitions behind the papers, although I believe the section from Line 59 to Line 123 is a bit messy and hard to follow. Adding more structure to this section and clearly highlighting the different points covered in that paragraph would really help.
- Significance: The number of rounds after which an arm can be safely eliminated from the set of “queriable” arms seems really large and not very practical. However, this weakness has been identified in this paper and properly discussed (notably by providing a version of the upper bound where elimination happens when an arm is no longer considered a candidate).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What would be the main hurdles in extending the lower bound theorems to any one-dimensional exponential family?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors have adequately addressed the issue related to the number of rounds after which an arm is completely eliminated from future queries. This paper deals with very theoretical guarantees and, as such, does not present significant direct negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
In the context of developing lower bounds, the conventional approach involves creating two distributions of the arms that are identical except for one arm, which is made optimal in one of the distributions. Subsequently, a lower bound is established on the number of queries necessary to differentiate between these distributions, yielding a lower bound for the number of queries for the specific arm i. However, this method proves insufficient for deriving precise lower bounds in our particular scenario. As an alternative direction, we propose exploring the creation of alternative distributions in which all arms are modified instead of just one.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I thank the authors for their global rebuttal and their answer to my question. As my score is already positive, with respect to potential impact, I will keep the score as it is for now. | Summary: Authors investigate the stochastic Best Arm Identification (BAI) problem when the arms have an unknown correlation structure.
In contrast, the majority of BAI literature focuses on independent arms with unknown variances.
Authors propose algorithms for BAI in two settings (bounded random variables and gaussian random variables) and analyze the sample complexity of these algorithms.
In order to leverage the correlation structure, multiple arms are allowed to be sampled at each round.
They show that improved sample complexity can be achieved by estimating the correlation structure; moreover, the sample complexity is within a constant factor of the sample complexity for independent arms (even when independence is unknown in advance).
Simulations are run to demonstrate the method on a synthetic dataset.
Strengths: The main strength of the paper is an investigation into the stochastic BAI problem where independence between the arms is not assumed.
Authors develop algorithms where the covariance structure is estimated and used to help quickly eliminate suboptimal arms.
Of course, multiple arms must be sampled simultaneously in order to estimate the covariance between them.
On one hand, the analysis (and simulations) show that nice speed-ups in sample complexity can be obtained when the covariance structure is taken into account.
On the other hand, the analysis seems to indicate that the asymptotic rate of the sample complexity does not increase even when the arms are independent (but this independence is unknown in advance).
This has exciting implications for a variety of real-world settings where arms may be expected to be correlated.
Weaknesses: There are two main weaknesses of the paper.
The first main weakness is that the paper seems to not discuss (and sometimes ignore?) what I would consider to be a very important consideration: the "overhead cost" of estimating the covariance.
It seems intuitive that a method which places no assumptions on the covariances -- and therefore must estimate them -- must incur some overhead cost when compared to a setting where there are structured assumptions on the covariance (e.g. the independent setting).
This overhead cost may not be very large, but I am pretty certain that it should exist.
A couple of times throughout the paper (Lines 103, 247) authors claim that the sample complexity bounds they obtain are smaller than the independent case; however, it seems that they are only smaller than a constant factor (say, 2) times the sample complexity of the independent case.
In terms of the asymptotic rate of the sample complexity, this doesn't matter; however, in terms of finite sample, practical performance, this does seem to matter and it points out exactly this "overhead cost".
Overall, I think that the paper would be stronger if authors more appropriately discussed their results in this context, i.e. the existence of the overhead cost of estimating covariance relative to assuming it, but with large gains when dependencies help.
The second weakness of the paper is that the exact nature of the guarantees on the sample complexity are not so clear.
From what I understand, these algorithms need to contain the best arm with probability $1 - \delta$ for some total number of arm queries $N$ and this stopping time is a random variable.
The goal of the algorithm designer is that, under appropriate conditions, $N$ is small in some sense, as a random variable.
So what does it mean then in Theorems 4.1 and 5.1 that authors write $N \leq XYZ$? Does this happen with probability 1 (that seems unlikely)?
Does this mean that $N$ is order $XYZ$ in probability?
Or is $N$ not a random variable?
Perhaps there is some convention in the BAI literature so that this is very clear; but to someone in adjacent fields, this could use clarification.
I'm hopeful that this can be clarified and the fix is relatively easy.
Further comments on clarity and discussion of results are given below.
### Minor Comments
1. (Line 58): What does the $\pi$ subscript in $N$ represent, and why does it never appear again in the paper? Moreover, the last sentence in this paragraph seems to be a sentence fragment that is missing a part.
2. (Line 64): Could you describe more precisely what "comparing the means" is referring to here? What is the precise statistical problem i.e. estimation, hypothesis testing etc ?
3. (Line 71): A motivating example of clinical trials is discussed, "where the effects of drugs on patients with similar traits or comparing drugs with similar components may exhibit underlying correlations". However, the correlation between counterfactual outcomes (arms) can't be simultaneously observed in a causal inference setting -- only one outcome may be observed for each patient, often known as the "fundamental problem of causal inference". I recommend rephrasing this motivation so that arms cannot be interpreted to be counterfactual (and thus not simultaneously observable) outcomes, or selecting a different motivating example.
4. (line 103): authors write that "quantity (3) is always smaller (up to a numerical constant) than its independent case counterpart". I think this is phrased in an unfortunately misleading way. I presume that the authors mean "quantity (3) is no larger than a constant times the independent case counterpart". Unfortunately, writing the phrase "always smaller" gives the wrong impression that quantity (3) is at most quantity (2), which does not seem to be true. Great care should be given to statements like this, so they are not misrepresenting what authors have shown.
5. (Line 140, 157) citations appear in parentheses while other citations appear in brackets. A consistent style should be used throughout the paper.
6. (Line 142) "The objective is to maximize the cumulative regret..." is this a typo, i.e. "maximize" should be "minimize"?
7. (Line 172): The estimated quantities $V_{i,j,t}$ and $\mu_{i,t}$ should be defined (markdown doesn't like the hat notation, sorry for dropping it). In particular, these quantities are perhaps ill-defined without reference to the algorithm: how should we interpret the sample covariance up to iteration $t$ if arm $i$ was sampled at some round $s \leq t$ but $j$ was not?
8. (Line 7 of Algorithm 2): "Jointly query all the experts in $C$". The terminology "experts" is not defined, so I'd recommend not using it here.
9. (Line 247): Authors write "the sample complexity is on the order $X$ which is larger than both our bounds (5) and (6)". If I understand correctly, it seems that the correct statement is that "(5) and (6) are less than 2 $X$". It seems very important for authors to correctly characterize their contributions so readers do not misinterpret them.
10. (Line 275): the reference (20) is perhaps misplaced.
11. (Line 305): "$\delta$-sound algorithm" is used but not defined. From context clues, I think I understand, but authors should clarify it for the readers.
12. (Line 309): Authors claim that "Algorithm 2 is nearly optimal" but this is up to a $\log(K)$ and $\log(\Lambda)$ factors. I understand $K$, but I think that $\log(\Lambda)$ deserves some discussion. How large should we expect this to be?
13. (Line 319): "Theorem 6.2 demonstrates that our algorithm achieves near-optimal performance...up to a logarithmic factor". Same as above.
14. (Line 329): Authors write that "We stress that both variants guarantee a $\delta$-sound decision on the optimal arm" but it seems that reducing $82 t$ to $2 t$ actually doesn't necessarily satisfy $\delta$-sound? If I am correct in this understanding, then the claim should be modified appropriately.
15. (Line 356): Authors write that there is "an algorithm that compares candidate arms with convex combinations of the remaining arms" but this is not discussed in the main body nor in the simulations. This seems it requires more discussion to better contextualize it within the current work or be removed all together.
16. (Line 347): Authors write "In both experiments, we observe that Pairwise-BAI+ performs worse compared to Pairwise-BAI, indicating that, empirically, in the given scenarios, continuing to sample sub-optimal arms does not contribute to improved performance". There is either a typo here or I am a bit confused. I thought that Pairwise-BAI+ was where the number of extra rounds was reduced, so if it performs worse wouldn't that indicate that we need those extra rounds? I think there is a typo, but I want to bring this to the attention of the authors to confirm or deny.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I think this paper makes some interesting and strong contributions.
Unfortunately, I found some of the results and their discussions to be sufficiently confusing to a broader audience that I have set my score to "borderline reject".
However, if authors can answer the following questions to clarify these points (especially the first two), then I am more than happy to raise my score.
1. Can you please respond to my comments on the confusion regarding what $N \leq XYZ$ means if $N$ is a random variable? I'm hopeful that you can clarify this for me.
2. Can you please respond to my comments on the overhead cost of estimating the covariance structure and whether my assessment that the sample complexity bounds are within a factor of two of the independent case is correct?
3. Can Assumption 3 be relaxed to subgaussian? If the answer is no, then why not?
4. Can you comment more on the "algorithm that compares candidate arms with convex combinations of the remaining arms"? Why does it not appear in simulations and does it offer any additional benefits over the proposed algorithms?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
* **On Question 1:** about the formulation of our results. We apologise for not being clear. Please note that such presentation of the guarantees is standard in the literature of best arm identification with fixed confidence, see for instance: Karnin et al. (2013) and Jamieson et al. (2014). Allow us to clarify that the total number of queries, $N$, is a random variable depending on the learner's strategy and the observed samples. The guarantees we present in Theorems 4.1 and 5.1 take the form of an upper bound on the variable $N$, which holds with a probability of at least $1-\delta$. The statement of these theorems says that with probability at least $1-\delta$: [the algorithm finds the best arm --and-- the total number of queries satisfies $N\le f(\delta)$].
* **On Question 2:** Regarding the overhead cost of unknown covariances, we acknowledge that there are two different costs, depending on the variables' distributions:
+ For bounded variables: If variances were known, we could carry out the same analysis using the standard Bernstein inequality (Boucheron et al. 2013, Theorem 2.10) with exact variances. However, since variances are not known, we use the empirical Bernstein inequality (Maurer and Pontil 2009), in which the true variance is replaced by its empirical counterpart. The (accuracy) cost of plugging in the empirical estimate appears in the numerical constant of the sub-exponential tail term (the $\mathcal{O}(1/n)$ term), where we obtain a constant of $7/3$ instead of $1/3$ in the standard Bernstein version. This cost, as pointed out by the reviewer, is merely a numerical factor when compared to the sample complexity of algorithms designed explicitly for independent arms. On the other hand, when arms are dependent, our guarantee demonstrates that being adaptive to covariance can result in a substantial improvement, possibly by an arbitrary factor, in certain scenarios, as highlighted in the introductory examples in lines 78-107.
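For concreteness, the two bounds contrasted above take the following form for i.i.d. variables $X_1,\dots,X_n$ in $[0,1]$ (a sketch based on the cited references, not verbatim statements from our paper). With probability at least $1-\delta$,
$$\frac{1}{n}\sum_{t=1}^{n} X_t - \mathbb{E}[X] \;\le\; \sqrt{\frac{2\sigma^2\log(1/\delta)}{n}} + \frac{\log(1/\delta)}{3n} \qquad \text{(standard Bernstein, known variance } \sigma^2\text{)},$$
while the Maurer-Pontil empirical version, with sample variance $\hat{V}_n$ in place of $\sigma^2$, reads
$$\mathbb{E}[X] - \frac{1}{n}\sum_{t=1}^{n} X_t \;\le\; \sqrt{\frac{2\hat{V}_n\log(2/\delta)}{n}} + \frac{7\log(2/\delta)}{3(n-1)} \qquad \text{(empirical Bernstein)}.$$
The $7/3$ versus $1/3$ constants in the sub-exponential terms are exactly the overhead discussed above.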
+ For Gaussian variables the overhead of unknown variances takes a different form: if the sample size is larger than $\log(1/\delta)$, the cost of plugging the empirical variance estimate into Chernoff's concentration inequality is only a multiplicative constant slightly larger than one (nearly $1+2\sqrt{\log(1/\delta)/n}$). However, in the small-sample regime ($n<\log(1/\delta)$), the cost is a multiplicative factor of $\exp(\sqrt{\log(1/\delta)/n}+1/2)$ due to the nature of the left tail of the chi-squared distribution (we refer the reviewer to Sections D and K of the appendix for detailed calculations). We would like to draw the reviewer's attention to the fact that, in most natural regimes, the number of queries made for each arm is larger than $\log(1/\delta)$, hence the last described effect does not arise. However, in some specific regimes (such as the case of very small variances of the arms) an optimal algorithm should query fewer than $\log(1/\delta)$ samples, which necessitates introducing the exponential multiplicative term presented above into the concentration upper bound. This translates into a different form of guarantee presented in Theorem 5.1, inequality (8). It is important to note that the cost in this regime cannot be avoided, as highlighted by our lower bound presented in Theorem 6.2.
+ In summary, the overhead cost of unknown covariance is rather mild except in very specific regimes.
* **On Question 3:** Regarding the extension to sub-Gaussian variables, we would like to clarify that our algorithm relies on the empirical Bernstein inequality, which was originally designed in the literature for bounded variables. However, we have extended this inequality to accommodate Gaussian variables by leveraging existing concentration results. It is essential to note that developing such inequalities for sub-Gaussian variables is generally a non-trivial task. One possible direction for extending the considered class of distributions is to suppose that arms follow a sub-Gaussian distribution and satisfy a Bernstein moment assumption (such extensions were pointed out by works on bounded variables, e.g., Balsubramani and Ramdas 2016). For this class of distributions, we can build on the standard Bernstein inequality with known variance, then plug in an empirical variance estimate by leveraging the concentration of quadratic forms (see Bellec 2019). However, it remains uncertain whether an extension for sub-Gaussian variables (without additional assumptions) is practically feasible.
* **On Question 4:** The algorithm that compares candidate arms with convex combinations of the remaining arms is developed in Section B of the appendix. It was not included in the main body due to the limitation on the maximum number of pages allowed. In this algorithm, we compare candidate variables with a convex combination of other variables. In some cases, it can lead to improved guarantees. We refer the reviewer to the paragraph between lines 464 and 471 in the appendix for the motivation behind introducing this algorithm.
* We thank the reviewer for pointing out the minor remarks and identifying typos. We will certainly consider and address these observations in the final version.
Z. Karnin, T. Koren, and O. Somekh. Almost optimal exploration in multi-armed bandits. ICML, 2013.
K. Jamieson, M. Malloy, R. Nowak, and S. Bubeck. lil’UCB: an optimal exploration algorithm for multi-armed bandits. COLT, 2014.
Balsubramani and Ramdas, Sequential nonparametric testing with the law of the iterated logarithm, UAI 2016.
P. C. Bellec. Concentration of quadratic forms under a Bernstein moment assumption. arXiv preprint 2019.
Boucheron S., Lugosi G., Massart P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press.
Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample variance penalization, 2009.
---
Rebuttal Comment 1.1:
Title: response to authors
Comment: I thank the authors for their thoughtful response to my review.
**Question 1**:
I thank author for their answer.
This confusion seems to be entirely mine.
**Question 2**:
I completely acknowledge that the overhead cost can be mild in typical cases, especially with strong correlations. It just seems like it must be there in the "worst case". By reading other responses, I think authors agree. It would be good to be more transparent in the paper about this.
**Question 3**: thanks for the helpful elaboration!
**Question 4**: thanks.
I will raise my score to reflect the updated answers. | Summary: This paper considers the problem of best arm identification with covariance in the fixed confidence setting, where arms can be dependent and rewards can be sampled simultaneously. The authors design algorithms that adapt to the unknown covariance of arms and prove that substantial improvement can be achieved over the standard setting. The authors also provide lower bounds and experimental results that support their theoretical findings.
Strengths: 1. The considered problem is interesting and well-motivated.
2. The theoretical analysis looks sound. Both upper and lower bounds are provided.
Weaknesses: 1. What is the additional novelty and contribution of Theorem 5.1 compared to Theorem 4.1. It seems that there is no significant difference between the algorithm design and analysis for the bounded reward setting and that for the Gaussian reward setting. Why not unify them, e.g., present this work in the unified sub-Gaussian setting? Please correct me if my understanding is wrong.
2. Could you compare Theorems 4.1 and 5.1 with the results of existing covariance-adaptive bandit works?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
* **On Question 1 (part 1):** Comparison of Theorems 4.1 and 5.1: Theorem 4.1 deals with bounded variables. Here, the sum of bounded variables may exhibit a sub-exponential tail, which leads to the additional $1/(\mu_i-\mu_j)$ term in the complexity of comparing two variables. This additional term does not appear for Gaussian variables in Theorem 5.1. Hence the guarantees for the two classes of variables are different.
* **On Question 1 (part 2):** Recall that we provided an additional guarantee (that was proved specifically for Gaussian variables) in Theorem 5.1 inequality (8), which is sharper than inequality (7) in the regime where variances are very small.
* **On Question 1 (part 3):** Regarding the extension to sub-Gaussian variables, we would like to clarify that our algorithm relies on the empirical Bernstein inequality, which was originally designed in the literature for bounded variables. However, we have extended this inequality to accommodate Gaussian variables by leveraging existing concentration results. As noted previously, the result in this setting has a somewhat neater form, since it eschews the additional sub-exponential term. It is essential to note that developing such inequalities for sub-Gaussian variables is generally a non-trivial task. One possible direction to extend the considered class of distributions is to suppose that arms follow a sub-Gaussian distribution and satisfy a Bernstein moment assumption (such extensions were pointed out by works on bounded variables, e.g., Balsubramani and Ramdas 2016). Given this class of distributions, we can build on the standard Bernstein inequality with known variance, then plug in an estimate of the empirical variance leveraging the concentration of quadratic forms (see Bellec 2019). However, it remains uncertain whether an extension for sub-Gaussian variables (without additional assumptions) is practically feasible.
* **On Question 2 (part 1):** Regarding the comparison with existing covariance-adaptive bandit works: while covariance-adaptive approaches have been explored in the context of combinatorial semi-bandits, it is important to note that the setting and objectives of this class of problems differ from ours. In our setting (Protocol 1), the learner has the flexibility to query any subset of arms, and the ultimate goal is to identify the single best arm. On the other hand, in combinatorial semi-bandits, the learner selects a subset from a predetermined set of subsets $\mathcal{M} = \left\lbrace M_1, \dots, M_d \right\rbrace$ at each round $t$, where $M_i \subset [K]$ and $\lVert M_i \rVert_1=m$ for a specific problem parameter $m$. However, the objective in the latter case is not to identify the best arm individually but rather to find the subset $M_i$ with the largest sum of rewards. Given these differences, we refrained from comparing the guarantees obtained in our work with those of combinatorial semi-bandits. The dissimilarities in problem structures and primary objectives lead us to believe that such a comparison would not be meaningful.
* **On Question 2 (part 2):** As mentioned in the related work section, various works in the literature have focused on developing strategies that adapt to the variances of the arms. In contrast, our guarantees go beyond being solely adaptive to arm variances; they also incorporate adaptivity to the covariances between different arms.
In lines 242-248, we have provided a comparison of these two types of guarantees. Specifically, when arms are independent, our guarantees exhibit adaptivity to variances, though with a numerical factor of $2$. However, when arms are correlated, the improvement achievable with our procedure can be an arbitrary factor, favoring our approach.
Balsubramani and Ramdas, Sequential nonparametric testing with the law of the iterated logarithm, UAI 2016.
P. C. Bellec. Concentration of quadratic forms under a Bernstein moment assumption. arXiv preprint 2019.
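As a concrete illustration of the empirical Bernstein machinery this rebuttal refers to, here is a minimal sketch of an Audibert–Munos–Szepesvári-style confidence radius for i.i.d. variables bounded in $[0, b]$ (an illustration of the general form only, not the paper's exact inequality; the constants vary across versions in the literature):

```python
import math

def empirical_bernstein_radius(samples, b, delta):
    """Empirical Bernstein confidence radius for i.i.d. variables in [0, b].

    Uses the Audibert-Munos-Szepesvari style bound:
      |mean - mu| <= sqrt(2 * var_hat * log(3/delta) / t) + 3 * b * log(3/delta) / t
    The variance-dependent first term shrinks fast for low-variance arms; the
    second, range-dependent term is the sub-exponential-style correction.
    """
    t = len(samples)
    mean = sum(samples) / t
    var_hat = sum((x - mean) ** 2 for x in samples) / t  # empirical variance
    log_term = math.log(3.0 / delta)
    return math.sqrt(2.0 * var_hat * log_term / t) + 3.0 * b * log_term / t
```

For low-variance samples this radius is much smaller than the variance-agnostic Hoeffding radius $\sqrt{\log(2/\delta)/(2t)}$, which is exactly why variance (and, in the paper's setting, covariance) adaptivity pays off.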
---
Rebuttal Comment 1.1:
Title: Thank the authors for their response
Comment: I thank the authors for their response. My concerns were addressed. I raised my score from 5 to 6. | Summary: This paper studies the problem of identifying $\epsilon$-optimal arms given a confidence input $\delta$, i.e., under the PAC model. Instead of pulling only one arm and observing its reward, the authors leverage the underlying structure of the arm distributions by allowing multiple queries per round (Protocol 1). Compared to related works, this paper relaxes the assumption of independent arm distributions; Protocol 1 can estimate the means and covariances of arms and accelerate best arm identification by utilizing the extra information. The authors propose two algorithms, one for the scenario where the arms are bounded and one where the arms follow a Gaussian distribution. Lower bounds for both settings are also provided.
Strengths: The idea of exploring the underlying structure of arms by allowing simultaneous queries is attractive. It not only drops the assumption of independent arm distributions, which is unrealistic in various scenarios, but also accelerates best arm identification by leveraging the shared information between arms. Based on the multiple-queries protocol, the authors provide two main theorems for bounded variables and Gaussian distributions, along with the corresponding lower bounds. They also manage to notice a failure mode of the algorithm, as described in line 183, and give a solution. In addition, the paper is well-structured and easy to follow.
Weaknesses: The authors emphasize that their results are adaptive to unknown covariance, but based on the algorithms and simulations, the method can only handle the case of the same covariance between all arms (correct me if I am wrong), which reduces the practical use of this paper. In addition, as stated in the related work part, the topic of best arm identification under the PAC-learning framework is well-studied, as is the stochastic combinatorial semi-bandit problem. The motivation and main contributions of this paper are a little unclear. As for the simulation part, it would be helpful to provide some explanation for the three chosen benchmark algorithms to add reliability. I was expecting to see a comparison of the proposed algorithm against older algorithms that assume independent arms, in scenarios where that assumption fails.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Under the PAC framework, besides the confidence input $\delta$, the accuracy parameter $\epsilon$ is also important and related to the sample complexity, but I did not find this parameter in the algorithms (e.g., Theorem 4.1, line 198).
2. If the assumption of arm independence holds, can we still use the proposed algorithms? Would there be a trade-off between complexity and generalization?
3. Could you provide a concrete example of the failure case mentioned in line 183? And how probably will this kind of failure happen?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper tries to utilize the covariances between arms to accelerate best arm identification; what if we went a step further and used causal relationships instead of associations? I know some research has been done in this area (e.g., Lattimore, 2016), and it would be interesting to consider this as a future direction. The proposed algorithms, although they allow dependent arm distributions, still have many constraints (limited types of variables, suitability only for the same covariance, pairwise comparisons), which reduce the practicality of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
* **About the reviewer's summary and Question 1:** Please note that our objective in this paper is not identifying $\epsilon$-optimal arms (also known as $(\epsilon,\delta)$-PAC setting) but identifying the (exact) best arm with probability at least $1-\delta$.
* **About the first point raised in the weaknesses section:** It is unclear to us what the reviewer suggests by "it can only solve the same covariance between all arms (correct me if I were wrong)". We would like to emphasize that we make no structural assumptions on the covariance between arms and our guarantees hold for any arbitrary covariance matrix of the joint distribution of the arms variables. In the bounded case, of course, the possible covariance matrix is implicitly constrained by the boundedness assumption.
* **About Question 2:** In case the arms are independent our guarantees recover the standard known sample complexity up to a constant factor. However, whenever the arms variables have a strong correlation, our guarantees show that a significant improvement is made.
* **About Question 3:** The failure case mentioned in line 183 may occur in some very specific scenarios. To illustrate, consider the following toy example with $3$ arms: $X_1 \sim \mathcal{N}(\epsilon+\epsilon^{3/2},1)$, $X_2 = X_1-Z_2$ where $Z_2 \sim \mathcal{N}(\epsilon, \epsilon)$ and $X_3 = X_2-Z_3$ where $Z_3\sim \mathcal{N}(\epsilon^{3/2}, (\epsilon/\kappa)^2)$, $\kappa>1$ is a constant to be specified and $X_1, Z_2$ and $Z_3$ are independent. Denote $\Lambda_{ij} = \frac{\text{Var}(X_i-X_j)}{\mathbb{E}[X_i-X_j]^2}$. Hence, in the previous example: $\Lambda_{12} = 1/\epsilon$, $\Lambda_{23} = 1/(\kappa^2\epsilon)$ and $\Lambda_{13} = (\epsilon+(\epsilon/\kappa)^2)/(\epsilon+\epsilon^{3/2})^2$. For $\epsilon$ very small we have $\Lambda_{12}, \Lambda_{13} \sim 1/\epsilon$ and $\Lambda_{23} \sim 1/(\kappa^2 \epsilon)$. Observe that the stopping time for comparing arms $i$ and $j$ (based on the test we considered) is a random variable $\tau$. We showed (Lemma E.4) that with high probability it belongs to the interval $[c_{\inf}\log(1/\delta_{\tau}) \Lambda_{ij}, c_{\sup}\log(1/\delta_{\tau})\Lambda_{ij}]$ (where $c_{\inf}<c_{\sup}$ are numerical constants). Therefore, in this example, arm $2$ eliminates arm $3$ faster than arm $1$. However, we may have that arm $1$ eliminates arm $2$ first at round $\sim c_{\inf} \Lambda_{12}$ (this can happen if we choose $\kappa^2 < c_{\sup}/c_{\inf}$), then we need to compare arm $1$ with arm $3$, paying a cost higher on average than the cost of comparing arms $2$ and $3$. | Rebuttal 1:
Rebuttal: We thank the reviewers for the valuable feedback. We address below some points raised by the reviewers:
* **Link with bandits literature with dependent arms:**
+ **Bandits on graphs:** Previous studies on graph-based bandit problems with side observations in the stochastic setting (such as Caron et al., 2012) also explore the potential of simultaneously observing rewards from multiple arms. However, in these investigations, it is assumed that the distributions of arms are independent. We are not aware of any existing research on bandit problems in graph-based scenarios where the learner successfully attains adaptability to covariance among arms.
+ **Combinatorial semi-bandits:** Regarding the comparison with existing covariance-adaptive bandit works: while covariance-adaptive approaches have been explored in the context of combinatorial semi-bandits, it is important to note that the setting and objectives of this class of problems differ from ours. In our setting (Protocol 1), the learner has the flexibility to query any subset of arms, and the ultimate goal is to identify the single best arm. On the other hand, in combinatorial semi-bandits, the learner selects a subset from a predetermined set of subsets $\mathcal{M} = \\{ M_1, \dots, M_d \\}$ at each round $t$, where $M_i \subset [K]$ and $\lVert M_i\rVert_1 \le m$ for a specific problem parameter $m$. However, the objective in the latter case is not to identify the best arm individually but rather to find the subset $M_i$ with the largest sum of rewards. Given these differences, we refrained from comparing the guarantees obtained in our work with those of combinatorial semi-bandits. The dissimilarities in problem structures and primary objectives lead us to believe that such a comparison would not be meaningful.
* **On extending the assumptions to include sub-Gaussian variables:** We would like to clarify that our algorithm relies on the empirical Bernstein inequality, which was originally designed in the literature for bounded variables. However, we have extended this inequality to accommodate Gaussian variables by leveraging existing concentration results. It is essential to note that developing such inequalities for sub-Gaussian variables is generally a non-trivial task. One possible direction to extend the considered class of distributions is to suppose that arms follow a sub-Gaussian distribution and satisfy a Bernstein moment assumption (such extensions were pointed out by works on bounded variables, e.g., Balsubramani and Ramdas 2016). Given this class of distributions, we can build on the standard Bernstein inequality with known variance, then plug in an estimate of the empirical variance leveraging the concentration of quadratic forms (see Bellec 2019). However, it remains uncertain whether an extension for sub-Gaussian variables (without additional assumptions) is practically feasible.
* **Position of our contributions with respect to the standard bandits setting and variance-adaptive A/B testing:** Please refer to the second and third points of the rebuttal to reviewer THM3 for a more detailed discussion. In the standard BAI setting (one query per round), previous works (Mnih et al. 2008; see our related work section) developed strategies that are adaptive to the variances of arms. In our setting, by allowing simultaneous queries, we developed strategies that are adaptive to both the variances of arms and the covariances between arms. As shown by our introductory toy examples in lines 78-107, some basic intuitions from the standard setting no longer hold: for instance, we show that a sub-optimal arm may be eliminated by another sub-optimal arm much faster than by the optimal arm. Our main contribution is to develop strategies that take such observations into account, among others, leading to guarantees that substantially improve upon those of algorithms developed in the standard setting.
* **Regarding the cost of not knowing the variances:** Please refer to the second point of the rebuttal on reviewer pLAq for a more precise discussion. Regarding the overhead cost of unknown covariances, we acknowledge that in the scenario of independent arms, there is indeed a cost associated with being adaptive to covariances. However, we agree that this cost is merely a numerical factor when compared to the sample complexity of algorithms designed explicitly for independent arms. On the other hand, when arms are dependent, our guarantee demonstrates that being adaptive to covariance can result in a substantial improvement, possibly by an arbitrary factor, in certain scenarios. This improvement highlights the significance of considering covariance information in such situations and justifies the usefulness of our approach in dealing with dependent arms.
Volodymyr Mnih, Csaba Szepesvári, and Jean-Yves Audibert. Empirical Bernstein stopping. ICML 2008.
Balsubramani and Ramdas, Sequential nonparametric testing with the law of the iterated logarithm, UAI 2016.
P. C. Bellec. Concentration of quadratic forms under a Bernstein moment assumption. arXiv preprint 2019.
Caron, Stéphane, Branislav Kveton, Marc Lelarge, and Smriti Bhagat. “Leveraging side observations in stochastic bandits.” UAI 2012. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This study addresses the problem of best arm identification (BAI) in the context of dependent arms. Unlike conventional settings, efficiency can be enhanced by exploiting the inherent correlation structure. This setting holds broad applications, including in clinical trials. Specifically, the authors concentrate on bandit scenarios with bounded and Gaussian rewards. The validity of their approach is substantiated through simulation studies.
Strengths: The authors introduce a novel setting for Best Arm Identification (BAI) with fixed confidence. By accommodating correlation, we can devise more efficient strategies for identifying the optimal arm. This setting is intriguing and holds significant practical utility.
Weaknesses: I believe several claims are not sufficiently substantiated. I have detailed these claims in the subsequent 'Questions' section. If the authors fail to adequately address my queries, these unsupported claims would constitute a weakness in the study.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In line 81, what does $\epsilon\mathcal{N}(1, 1)$ denote?
- In line 83, the authors suggest that since two queries are possible, the learner can perform a $T$-test. However, even in standard BAI, a setting based on such a hypothesis test, as suggested by the authors, exists. For instance, refer to Balsubramani & Ramdas (2016).
- Therefore, considering a hypothesis test in itself is feasible even in the standard setting. I believe that the key here is the variance reduction effect facilitated by two queries.
- In lines 84-86, the authors present an example of two-armed Gaussian bandits. They argue that the lower bound in standard BAI is $O((1+\epsilon)^2\log (1/\delta) / \epsilon^2)$. This seems to be a lower bound for the one-armed bandit problem. As highlighted by Kaufmann et al. (2016), the lower bound for two-armed Gaussian bandits is given as $O(\mathrm{Var}(X_1 - X_2) \log (1/\delta) / \epsilon^2)$ by assuming there are two arms, $Y_1 \sim N(\epsilon, \mathrm{Var}(X_1 - X_2))$ and $Y_2 \sim N(0, 0)$. As I stated earlier, the essence of this example is not two queries, but treating two arms as one and comparing this combined arm to an arm that returns zero.
- For the related work section, I suggest considering the literature on BAI on graphs as it also deals with similar problems. Although the authors cite a few studies in this field, could these studies be related to yours?
- In line 125, the authors state that "BAI in the fixed confidence setting was studied by [8], [19], and [9], where the objective is to find $\epsilon$-optimal arms under the PAC model." Fixed confidence BAI is not confined to $\epsilon$-best arm identification. For instance, [17] considers a different objective.
- Subsequently, the authors claim that "A summary of various optimal bounds for this problem is presented in [5, 17]." Here, [5] considers lower bounds in BAI with a fixed budget, which is a different setting. [17] addresses the fixed-confidence BAI problem, but the objective is not $\epsilon$-optimal arms.
- In addition to the related work raised by the authors, it seems that Kato and Ariu (2021) have explored BAI with dependent arms for more specific cases. For example, they consider two-armed Gaussian bandits where rewards are correlated via contextual information. How is their study relevant to yours?
Balsubramani and Ramdas, Sequential nonparametric testing with the law of the iterated logarithm, UAI 2016.
Kato and Ariu, The Role of Contextual Information in Best Arm Identification, 2021.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
* **On Question 1**: The notation $\epsilon \mathcal{N}(1,1)$ stands for $\epsilon X$, where $X \sim \mathcal{N}(1,1)$. We can write $\mathcal{N}(\epsilon,\epsilon^2)$ if it appears clearer, though we wanted to emphasize $\epsilon$ as a scaling factor.
* **On Questions 2 and 3**: We absolutely agree that there are indeed many existing approaches based on arm elimination, which are all akin to performing sequential tests. What we want to emphasize here is that with 2 simultaneous queries, conceptually we can use a _paired samples_ $t$-test, which is not possible in standard BAI. As noted by the reviewer, this allows for variance reduction. The work by Balsubramani and Ramdas (2016) focuses on sequential hypothesis testing to make decisions between two possibilities. In their setup, they assume independent samples and utilize the empirical Bernstein concentration inequality tailored for bounded variables to adapt to the variances of the two variables under consideration. Similar adaptive approaches have been explored in other studies, and achieving adaptivity to arm variances is not a novel concept, as highlighted in our related work discussion. However, our main contribution lies in addressing the multiple arms case with arbitrary dependence. In this scenario, as explained in the introductory discussion (lines 78-99), certain intuitions from the standard setting no longer hold. For instance, it is possible that a sub-optimal arm may be eliminated by another sub-optimal arm more rapidly than by the optimal arm, which presents new challenges when dealing with multiple arms. Our approach is designed to tackle this situation and provide a solution in such settings. We will add a discussion in the related work section on Balsubramani and Ramdas (2016), highlighting the common points between our procedure and theirs.
* **On Question 4 (part 1)**: The 2-armed bandits case discussed in lines 78-99 serves as an illustrative example and does not represent a stand-alone contribution. Our primary objective is to address the more general $K$-armed bandit case. It is crucial to note that in the standard Best Arm Identification (BAI) setting, only one query per round is allowed. Therefore, the reduction involving comparing the difference arm "$X_1-X_2$" and $0$ is not feasible, as the learner lacks access to samples of the variable $Y_t = X_{1,t}-X_{2,t}$ directly. Instead, the learner has access to samples from $Y' \sim X_{1,t}-X_{2,s}$, where $s\neq t$ (resulting in $X_{1,t}$ and $X_{2,s}$ being independent). In this context, it is important to observe that $\text{Var}(Y') = \text{Var}(X_{1,t}-X_{2,s}) = \text{Var}(X_1) + \text{Var}(X_2)$, which can be much larger than $\text{Var}(Y) = \text{Var}(X_{1,t}-X_{2,t}) = \text{Var}(X_1-X_2)$. Achieving adaptivity to the latter variance is only possible with two queries per round, which is a crucial component for our results, in addition to treating two arms as one in the 2-armed setting.
* **On Question 4 (part 2)**: Concerning the lower bound in the 2-arm setting: we are uncertain what the reviewer means by the 'one-armed bandit problem'. At any rate, reducing the (full) observation of the two arms $(X_1,X_2)$ to that of $(X_1-X_2, 0)$ results in a loss of information. Thus, a lower bound for the latter reduced setting does not (at least not without additional arguments) logically entail a lower bound for the initial setting. This is, however, somewhat beside the point we want to make, which is that
if we adhere to the standard BAI setting with one query per round, Kaufmann's lower bound is applicable and yields a sample complexity of $\mathcal{O}(\log(1/\delta)/\epsilon^2)$. However, it is essential to note that this lower bound does not extend to the multiple query case. In our work, we have presented a new lower bound specifically addressing the multiple query scenario in the $K$-armed bandits setting (Section 6).
* **On Question 5: Bandits on graphs**: Previous studies on graph-based bandit problems with side observations in the stochastic setting (such as Caron et al., 2012) also explore the potential of simultaneously observing rewards from multiple arms. However, in these investigations, it is assumed that the distributions of arms are independent. We are not aware of any existing research on bandit problems in graph-based scenarios where the learner successfully attains adaptability to covariance among arms.
* **On Question 8**: The study by Kato and Ariu (2021) examines the scenario where arms are correlated through contextual information. The primary distinction between their setting and ours lies in how the dependency between arms is accessed. In their work, the dependency between arms is accessible only through the context variable $X$, whereas in our approach, we directly obtain samples of the correlations between candidate arms by jointly querying two arms. Additionally, another notable difference is in the context samples $X_t$ received at each round $t$. In their work, these context samples are independent of the arms chosen. However, in our case, the "observed dependency" between candidate arms is directly related to the specific arms queried in each round, allowing us to have direct access to correlations among the queried arms.
Caron, Stéphane, Branislav Kveton, Marc Lelarge, and Smriti Bhagat. “Leveraging side observations in stochastic bandits.” UAI 2012.
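The variance-reduction effect of paired queries described in the rebuttal is easy to check numerically. The sketch below (the correlation level and means are illustrative choices, not values from the paper) compares the variance of paired differences, available with two simultaneous queries, against unpaired differences from separate rounds:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.99                           # strong positive correlation between arms
cov = [[1.0, rho], [rho, 1.0]]
x = rng.multivariate_normal([0.1, 0.0], cov, size=n)  # joint arm samples

# Paired differences (two simultaneous queries): Var(X1 - X2) = 2 * (1 - rho)
paired = x[:, 0] - x[:, 1]

# Unpaired differences (one query per round, samples from different rounds):
# Var = Var(X1) + Var(X2) = 2, regardless of the correlation
unpaired = x[:, 0] - rng.permutation(x[:, 1])

print(paired.var(), unpaired.var())
```

With $\rho = 0.99$ the paired variance is roughly $0.02$ versus roughly $2$ for unpaired differences, i.e., a hundredfold reduction in the effective noise driving the stopping time.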
---
Rebuttal 2:
Comment: Dear Reviewer,
Please reply to the rebuttal and indicate whether it clears your concerns, or at least acknowledge whether you have read the response. This is important to the authors.
Thanks,
Your Area Chair | null | null | null | null | null | null |
Model-Based Control with Sparse Neural Dynamics | Accept (poster) | Summary: This paper proposes a method for pruning neural networks with ReLU activation functions during training. When employed for learning the dynamics of a control system, this often leads to networks with few activation functions performing similarly to large networks. This allows applying mixed integer programming techniques to determine optimal control policies. The effectiveness of this approach is demonstrated in a comparison with a sampling-based optimizer on several tasks for a robotic manipulator.
Strengths: The idea of formulating planning problems with neural network dynamics as a mixed integer program seems novel to me. Moreover, I find the proposed approach of reducing the number of neurons during training very interesting, but I cannot comment on its novelty since I am no expert on this topic. Overall, the paper reads very well and has a good structure. The nice demonstration of the proposed method in a real-world robotic experiment is also a strength of the paper.
Weaknesses: My biggest concern is the missing discussion of computational complexity/computation time. I am not an expert on mixed integer programming, but a quick search suggests that even mixed integer linear programming is already NP-hard, so this seems to be a problem in general. I understand that this problem can be mitigated through a sufficiently small number of neurons (‘If only a few ReLUs are left in the model, Equation 10 can be efficiently solved to global optimality’), but I am missing a clear specification of what ‘sufficient’ means in this context. Moreover, I would expect it to be connected to the prediction horizon, i.e., the number of time steps considered in the sum in (10). Therefore, I think a complexity (e.g., in O notation) should be provided to give the reader an impression of how severely the computation time grows. This weakness also limits the usefulness of the evaluation in my opinion. Why would you reduce the number of neurons for MPPI in practice? I think the performance in relation to computation time is rather the important metric to look at for this comparison, i.e., a complexity-performance trade-off comparison would be essential. Therefore, the comparison seems a little unfair at the moment. This similarly extends to the robot experiment in Sec. 4.3, where the sampling and the horizon rate of the MPC are not specified. Moreover, it is not clear how fast the robot moves. These are all parameters which crucially influence how challenging the problem is.
In general, the method seems tailored to piece-wise affine systems or systems that look almost like that. The examples seem to go in this direction, but I cannot say for sure since I could not find information on what dynamics are actually learned in the robotic examples. I think it would be interesting to see how many activation functions are needed to accurately learn highly nonlinear dynamics, e.g., cart-pole swing-up, and how this affects the control performance.
Overall, the novelty seems to be mainly the (straightforward) connection of existing ideas, even though I admit that it is a very clever combination. The proposed reduction of neurons is not even targeted at achieving high control performance, but only at maintaining high model accuracy. High control performance is only addressed a posteriori by selecting the best model observed in experiments. Doing it as described in lines 206-208 runs the risk of executing a potentially bad controller on a real system. This seems like a dangerous thing to do in general. I think it would be much more interesting to directly optimize the model for achieving the best control performance.
Finally, I find the comparison in the numerical evaluation a little weak. When no experiment has to be done, as for the open-loop planning performance evaluation, I think more methods than only MPPI should be investigated. I am not an expert, but I know there exists more than one planning/control method for nonlinear systems, e.g., MPC with NN dynamics (Salzmann et al., 2023). Moreover, I do not understand why the learned dynamics model is not used as the environment for the classical RL methods. It is apparently accurate enough to allow for a direct transfer to a real-world experiment. So why are the interactions limited to a relatively small number with the real environment, when a large number of interactions with a potentially only slightly less accurate model are available?
T. Salzmann, E. Kaufmann, J. Arrizabalaga, M. Pavone, D. Scaramuzza and M. Ryll, "Real-Time Neural MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms," in IEEE Robotics and Automation Letters, vol. 8, no. 4, pp. 2397-2404, April 2023
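For readers unfamiliar with the sampling-based baseline the review refers to, one MPPI update can be sketched as follows. This is a generic textbook-style sketch of the method, not the paper's implementation; the function names and hyperparameters are illustrative assumptions:

```python
import numpy as np

def mppi_step(dynamics, cost, x0, u_nominal, n_samples=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI update: perturb a nominal control sequence with Gaussian noise,
    roll out the (learned) dynamics for each sample, and return the
    exponentially-weighted average of the perturbed sequences.

    dynamics: f(x, u) -> next state;  cost: c(x, u) -> scalar stage cost.
    """
    rng = rng or np.random.default_rng(0)
    horizon, u_dim = u_nominal.shape
    noise = rng.normal(0.0, sigma, size=(n_samples, horizon, u_dim))
    total_cost = np.zeros(n_samples)
    for k in range(n_samples):          # roll out each sampled control sequence
        x = x0
        for t in range(horizon):
            u = u_nominal[t] + noise[k, t]
            total_cost[k] += cost(x, u)
            x = dynamics(x, u)
    # Softmin weighting: low-cost rollouts dominate the update
    weights = np.exp(-(total_cost - total_cost.min()) / lam)
    weights /= weights.sum()
    return u_nominal + np.einsum("k,ktd->td", weights, noise)
```

Note that the per-update cost scales with `n_samples * horizon` dynamics evaluations regardless of network size, which is the backdrop for the reviewer's question about why one would prune the network for MPPI at all.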
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: What is the point of the regularization in (9) if you bound the total number of ReLu activation functions through $\epsilon$ in (5) anyways?
What is a closed-loop planning performance? Or do you mean closed-loop control performance?
In (8), is $z_{ij}$ the approximation of $\pi_{ij}$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Potential negative societal impact is not mentioned in the paper. Some limitations of the proposed approach (e.g., the restriction to ReLU activation functions) are mentioned throughout the paper, but there is no dedicated paragraph about limitations. I think there are some limitations (e.g., computational complexity) that should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing our paper, and for your constructive suggestions that helped improve our work.
> I think a complexity (e.g., in O notation) should be provided to give the reader an impression how severe the computation times grows.
We agree with the reviewer that a more precise description of the runtime is useful for the readers. The complexity is exponential in the number of ReLUs in the worst case, but in practice solvers like Gurobi are highly optimized, and their branch-and-bound implementations can solve medium-scale problems very quickly, although the performance highly depends on the problem formulation, problem conditioning, and the tightness of the convex relaxation.
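To make the worst case concrete: the standard big-M encoding of a ReLU network introduces one binary variable per ReLU, so branch-and-bound may in principle explore up to $2^n$ activation patterns for $n$ ReLUs. Below is a minimal sketch of that encoding for a single unit, verified by brute force over a candidate grid (the bound `M`, the grid, and the tolerance are illustrative assumptions, not the paper's formulation):

```python
def feasible_ys(x, M=100.0):
    """Big-M MIP encoding of y = max(0, x) with binary indicator a:
        y >= x,  y >= 0,  y <= x + M*(1 - a),  y <= M*a.
    Enumerate a grid of candidate y values; only y = max(0, x)
    should be feasible for some a in {0, 1}."""
    eps = 1e-9

    def ok(y, a):
        return (y >= x - eps and y >= -eps
                and y <= x + M * (1 - a) + eps
                and y <= M * a + eps)

    grid = [i * 0.5 for i in range(-10, 11)]  # candidate outputs -5.0 .. 5.0
    return [y for y in grid if any(ok(y, a) for a in (0, 1))]
```

A network with $n$ such units needs $n$ binary variables, hence the exponential worst case; in practice solvers prune most of the search tree.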
> I think the performance in relation to computation time is rather the important metric to look at for this comparison, i.e., a complexity-performance trade-off comparison would be essential.
We agree that a complexity-performance trade-off is essential, and we used the number of remaining ReLUs in the dynamics model as a proxy for computation complexity. The solve time in practice depends on the problem formulation, conditioning, initialization, and various solver techniques and heuristics. In our experiments, solving a MIP with Gurobi induces a high variance in computation time across different tasks, dynamics model architectures, and problem initializations, as shown in Rebuttal PDF Fig. 3 left.
Although the wall clock computation time exhibits high variance for the same neural dynamics model, we observe that the computation time is positively correlated with the number of ReLUs in the network, and thus chose to use the number of remaining ReLUs as an effective proxy for computation complexity in our experiments.
> the method seems tailored to piece-wise affine systems or systems that look almost like that
Our method focuses on neural networks with ReLU activations, which are piecewise affine functions. With a sufficient number of pieces, they exhibit extremely strong approximation power and can approximate smooth nonlinear functions arbitrarily well. Piecewise affine systems have been widely studied and used in the literature [Sontag et al.] to approximate highly complex and nonlinear systems.
Our experiments examined the applicability of our approach to systems that are not piecewise affine. The dynamics of the rope in our Rope Manipulation task are not piecewise affine. The Object Pushing task involves modeling the object’s orientation, which is also not piecewise affine.
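To illustrate the piecewise-affine claim, here is a small numpy sketch (the weights are random placeholders, not a trained model): within a single activation region, equal input steps produce exactly equal output increments, i.e., the network is locally affine.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # toy 2-8-1 ReLU MLP
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def f(x):
    return W2 @ np.maximum(0.0, W1 @ x + b1) + b2

def pattern(x):
    """Activation pattern: which affine piece of the map x falls in."""
    return tuple((W1 @ x + b1) > 0)

x = np.array([0.3, -0.2])
d = np.array([1e-4, 2e-4])
if pattern(x) == pattern(x + 2 * d):      # stayed inside one region
    step1 = f(x + d) - f(x)
    step2 = f(x + 2 * d) - f(x + d)
    assert np.allclose(step1, step2)      # equal increments: locally affine
```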
> The proposed reduction of neurons is not even targeted to achieve a high control performance, but only to maintain high model accuracy.
Using the same control optimization tool, we believe that a dynamics model with higher accuracy leads to better control performance, as long as the model is not too large for the control optimization tool to solve within a reasonable time. Our experiments shown in Fig. 5 and Rebuttal PDF Fig. 1 also show that the closed-loop control performance degrades as model accuracy decreases, using the same optimization tool.
We would also like to note that simultaneously targeting high control performance during sparsification might necessitate a differentiable design for the control optimization procedure, which we leave for future work.
> I think more methods than only MPPI should be investigated
We provided comparisons to MPPI, as well as model-free RL methods (PPO and SAC), all of which are widely used in the literature for solving similar control and manipulation tasks. We also performed additional experiments using model-based RL (MBPO [Janner et al.]) on two OpenAI Gym benchmark environments (Cartpole-v1, Reacher-v4). The results are shown in Rebuttal PDF Fig. 1.
Our approach outperforms prior methods on the two standard RL benchmark environments. Notably, our approach achieved superior performance with highly sparsified neural dynamics models with fewer ReLUs compared to prior works.
> I do not understand why the learned dynamics model is not used as environment for the classical RL methods.
The neural dynamics model learned in our method is generic and not limited to only working with our planning framework. We took the learned full and sparsified dynamics models and trained a control policy with PPO interacting only with the learned dynamics model as suggested, and provide the experiment results below (also included in Rebuttal PDF Fig. 2).
| Num. ReLUs | 48 | 16 | 8 | 4 | 2 | 1 | 0 |
|:----------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| RMSE | 0.006291 | 0.007201 | 0.009670 | 0.018122 | 0.088279 | 0.088655 | 0.089807 |
The results above on Cartpole-v1 show that the neural dynamics models trained in our method can generalize and combine with another model-based control framework. As the model becomes progressively sparsified, the closed-loop control performance gracefully degrades. Our best performing model, MIP with 4 ReLUs, outperformed these models and achieved an RMSE of 0.003728.
> why are the interactions limited to a relatively small number with the real environment, when a large number of interactions with a potentially only slightly more inaccurate model are available?
We limited the number of interactions for training with the two model-free methods, PPO and SAC, to be the same as the number of real environment interactions that our method used to train the dynamics model. This setting is practical in scenarios where interacting with the real environment is costly or not always available, and is widely studied in the RL community. As the reviewer suggested, we conducted additional experiments using a model-based RL method to interact with the neural dynamics model to learn a control policy, reported above.
**Due to length limit, we are happy to respond to the remaining questions during the discussion phase.**
---
Rebuttal 2:
Title: Additional Author Responses to Reviewer LeuW
Comment: > Why would you reduce the number of neurons for MPPI in practice?
In the closed-loop control results in Fig. 5, we aimed to demonstrate that our method achieves comparable or better performance with a highly sparsified model, compared to MPPI using a full, non-sparsified model (rightmost column of each plot). We included the results of MPPI with slightly sparsified models to illustrate how a slightly sparsified model affects closed-loop control performance using the same optimizer, and to visualize the trend that a sweet spot exists where a reasonably accurate model can benefit from more powerful optimization tools, leading to superior closed-loop control performance.
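Since MPPI serves as the main baseline throughout, a compact numpy sketch of a single MPPI update may help readers unfamiliar with it (the hyperparameters, the toy scalar dynamics, and the helper names below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, n_samples=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI update: sample perturbed action sequences, roll them out
    through the dynamics model, and re-weight the noise by exponentiated
    negative cost (lower-cost rollouts pull the nominal plan toward them)."""
    rng = rng or np.random.default_rng(0)
    noise = sigma * rng.normal(size=(n_samples, len(u_nom)))
    costs = np.array([rollout_cost(x0, u_nom + eps, dynamics, cost) for eps in noise])
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + w @ noise  # importance-weighted update of the plan

def rollout_cost(x0, u, dynamics, cost):
    x, c = x0, 0.0
    for ut in u:
        x = dynamics(x, ut)
        c += cost(x)
    return c

# Toy check: drive x' = x + u toward 0 from x0 = 3
dynamics = lambda x, u: x + u
cost = lambda x: x * x
u = np.zeros(5)
for i in range(20):
    u = mppi_step(3.0, u, dynamics, cost, rng=np.random.default_rng(i))
```

Because MPPI only needs forward rollouts, it works with the full (non-sparsified) model, while MIP-based planning requires the sparsified one.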
> This similarly extends to the robot experiment in Sec. 4.3, where the sampling and the horizon rate of the MPC are not specified.
In the Object Pushing, Object Sorting, and Rope Manipulation tasks, we empirically found that using a horizon of one and re-optimizing at each step based on environmental feedback is sufficient to successfully complete a variety of tasks.
> it is not clear how fast the robot moves
We provided videos of our method controlling a real robot performing the tasks in the supplementary video. Since each action is standalone, there is no additional feedback from the environment while an action is being executed, so the speed of the robot does not affect the task completion.
> it would be interesting to see how many activation functions are needed for accurately learning highly nonlinear dynamics, e.g., cart-pole swing-up, and how this affects the control performance
We showed in Fig. 3a how the number of remaining ReLUs affects open-loop prediction accuracy of the learned dynamics model. For example, in Object Pushing, a model with 0 ReLUs (linear) failed drastically, while a model with 8 ReLUs was still reasonably accurate.
In the Reacher-v4 environment, we observe a gap where all models with 16 ReLUs or fewer fail significantly. In the Cartpole-v1 environment with simpler dynamics, neural dynamics models sparsified down to 1 or 2 ReLUs can still achieve reasonable performance, but when sparsifying down to 0 ReLUs (linear), we observe a significant drop in both prediction accuracy and closed-loop control performance.
Below we show the results on the closed-loop control performance of models sparsified to different numbers of ReLUs remaining on the Reacher-v4 environment.
| Num. ReLUs | 48 | 32 | 24 | 16 | 8 | 2 |
|:----------:|:-------:|:-------:|:--------:|:--------:|:--------:|:--------:|
| RMSE | 0.053467 | 0.064591 | 0.064659 | 0.113871 | 0.153998 | 0.170154 |
While the control performance degrades gracefully as long as the number of ReLUs remains sufficient to learn a reasonably accurate model, we start to observe significantly worse control performance once the number of ReLUs becomes insufficient to represent the complex dynamics of the system.
> What is the point of the regularization in (9) if you bound the total number of ReLU activation functions through $\epsilon$ in (5) anyway?
We indeed use $\epsilon$ as a cap on the number of ReLUs. The regularization term in (9) aims to discourage unnecessary use of ReLU and identity mappings so that the gradient-based optimization favors a model architecture that is more likely to perform well after we sparsify the model to fewer ReLUs in the next round. For example, we start with a full model of 512 ReLUs in the first iteration and aim to sparsify down to 64 ReLUs after the first iteration. Because of the redundancy in neural networks, the model can likely achieve similar performance with all 512 ReLU activations or with fewer ReLUs. If the model keeps 512, or significantly more than 64, ReLUs in the first iteration, then after we prune the network down to 64 ReLUs for the next iteration, this drastic change in modeling capacity could lead to a noticeably larger decrease in prediction accuracy.
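As an illustration of this mechanism (a hypothetical sketch, not the paper's Eqs. (8)-(9)): each unit carries a categorical gate over behaviors such as {ReLU, identity}, and the regularizer taxes ReLU usage so the trained network concentrates its nonlinearity in few units before the hard prune. The gate parameterization and penalty below are assumptions for illustration only.

```python
import numpy as np

def gated_layer(z, probs):
    """probs[i] = (p_relu, p_identity) for unit i; the soft layer mixes
    the two behaviors, and pruning later snaps each gate to its argmax."""
    return probs[:, 0] * np.maximum(0.0, z) + probs[:, 1] * z

def relu_usage_penalty(probs):
    # Taxing p_relu steers redundant units toward identity during training,
    # so the subsequent hard prune changes the learned function less.
    return probs[:, 0].sum()

probs = np.array([[1.0, 0.0], [0.0, 1.0]])  # unit 0 = ReLU, unit 1 = identity
z = np.array([-2.0, -2.0])
out = gated_layer(z, probs)  # -> [0., -2.]
```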
>What is a closed-loop planning performance? Or do you mean closed-loop control performance?
Yes, we apologize for the confusion.
>In (8), is $z_{ij}$ the approximation of $\pi_{ij}$?
Yes, $z_{ij}$ is the continuous softmax approximation of the discrete probability distribution $\pi_{ij}$.
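For readers unfamiliar with the trick, a minimal numpy sketch of Gumbel-Softmax sampling follows (the logits and temperatures are illustrative; the paper's exact parameterization may differ):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Continuous relaxation of sampling from Categorical(softmax(logits)):
    add Gumbel(0, 1) noise, then apply a temperature-scaled softmax.
    As tau -> 0 the sample approaches a discrete one-hot vector."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # stable softmax
    return y / y.sum()

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0])
soft = gumbel_softmax(logits, tau=5.0, rng=rng)   # smooth, differentiable
hard = gumbel_softmax(logits, tau=0.01, rng=rng)  # nearly one-hot
```

The low-temperature sample behaves like a discrete choice while remaining differentiable with respect to the logits, which is what makes gradient-based optimization over the discrete $\pi_{ij}$ feasible.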
> I think there are some limitations (e.g., computational complexity) that should be added.
Please refer to the global rebuttal response for discussions of limitations of our work.
**Have these responses addressed the reviewer’s concerns? We look forward to continuing the discussion.**
---
Rebuttal Comment 2.1:
Comment: Thank you for this detailed response. I appreciate the additional simulations and think they would significantly improve the paper. In particular, the usage of MIP indeed seems to provide a slight advantage over every other method. Despite these changes, one of my major concerns still remains: it is not clear how other methods perform when comparing them for the same computation time. This cannot be resolved by using the number of remaining neurons as an approximate measure, since the methods' computation time depends differently on the number of neurons. A plot showing computation time over performance is therefore critical to demonstrate the advantages of the proposed method for closed-loop control, where the computation time defines the achievable sampling rate. Therefore, I currently do not intend to change my scores.
Follow-up question:
"Since each action is standalone, there is no additional feedback from the environment while an action is being executed"
$\rightarrow$ Does this mean you only use the model and MIP optimization to plan a trajectory? If so, this should be stated more clearly.
---
Reply to Comment 2.1.1:
Title: Additional Experiment Results Requested by Reviewer LeuW
Comment: Thank you for the follow up questions! We provide the experiment results requested and our response below.
> a plot showing computation time over performance is critical to demonstrate the advantages of the proposed method for closed-loop control
This is a great point. Following your suggestion, **we conducted additional experiments examining the tradeoff between** ***computation time*** **and** ***closed-loop control performance*** using the Reacher-v4 environment. We are unfortunately unable to submit plots in the discussion stage, so we present the results in the tables below.
The first row of each table denotes the mean computation time in seconds, and the second row denotes the closed-loop control performance measured by RMSE. The first table contains results using our method (MIP), and the second table contains results using MPPI.
Both methods achieved better closed-loop control performance when given more time to compute. **Our method (MIP) improved significantly in closed-loop control performance with more computation time**, leading to a superior RMSE of 0.04795 with 0.09716 seconds of computation. In contrast, **the performance of MPPI plateaued with more computation time** and achieved an RMSE of 0.06572 with 0.100086 seconds of computation.
**MIP (Ours), 48 ReLUs**
| | | | | | |
|--|--|--|--|--|--|
| Time (s) | 0.01198 | 0.02257 | 0.04903 | 0.08007 | **0.09716** |
| RMSE | 0.20975 | 0.19609 | 0.17762 | 0.06916 | **0.04795** |
**MPPI, 192 ReLUs**
| | | | | | |
|--|--|--|--|--|--|
| Time (s) | 0.00791 | 0.01822 | 0.04686 | 0.08284 | 0.100086 |
| RMSE | 0.09273 | 0.07185 | 0.06868 | 0.06693 | 0.06572 |
**This comparison further supports that our method, using a sparsified dynamics model, can benefit from more powerful optimization tools, and lead to superior closed-loop control performance under the same computation time compared to MPPI using a full dynamics model.**
> "Since each action is standalone…" Does this mean you only use the model and MIP optimization to plan a trajectory? If so, this should be stated more clearly.
**Our method performs closed-loop control by incorporating environment feedback between actions, but not during an action execution.** For example, in the Object Sorting tasks, after a push action has been completed, we use the new environment state observed to optimize for the next action, but we do not account for environment feedback while the robot is executing a single push action from start to end. We apologize for the confusion.
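The "feedback between actions, not during actions" scheme described above can be sketched on a toy scalar system (the one-step planner below is a stand-in for the paper's MIP solve; the dynamics and noise level are illustrative assumptions):

```python
import numpy as np

def plan_action(x, goal, u_max=1.0):
    """Stand-in for the per-action optimization: choose the single
    action that moves the predicted state closest to the goal."""
    return float(np.clip(goal - x, -u_max, u_max))

def closed_loop(x0, goal, n_steps, noise, rng):
    """Feedback enters only between actions: each action is planned from
    the newly observed state, but never adjusted mid-execution."""
    x = x0
    for _ in range(n_steps):
        u = plan_action(x, goal)          # optimize one standalone action
        x = x + u + noise * rng.normal()  # execute; environment perturbs
    return x

x_final = closed_loop(x0=5.0, goal=0.0, n_steps=10,
                      noise=0.1, rng=np.random.default_rng(1))
```

Despite per-action disturbances, re-planning from each observed state keeps the system near the goal, which is the sense in which the control is closed-loop.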
We hope that our additional experiments and responses addressed the reviewer’s questions and concerns. **We would be happy to continue the discussion if the reviewer has additional questions or concerns.** | Summary: This paper proposes a new framework for model-based control. The approach focuses on learning a sparse deep neural network and using a mixed-integer program solver for closed-loop planning. Experimental results are presented on several tasks including object and rope manipulation tasks. The results show that the proposed approach improves performance over strong baselines.
Strengths: - The framework proposed in this paper elegantly combines concepts from deep neural network (DNN) pruning and mixed integer programming (MIP) into a solution for model-based control.
- The specific technique for sparsifying a DNN by removing non-linearities rather than simply dropping nodes intuitively matches the goals of eventually using the model with a MIP solver.
- The experiments demonstrate that the approach is effective in both simulation and in the real-world on real hardware -- providing strong evidence that this is a generally applicable approach.
- The technique is demonstrated with both MLP and GNN based models, showing the versatility of the proposed approach.
- The supplemental video presents strong qualitative evidence supporting the efficacy of the approach on real-robots.
- The paper is very well written.
Weaknesses: - It is unclear if the performance gains in Figure 5 are significant. On the Object Pushing and Rope Manipulation tasks MIP does not appear to outperform MPPI. And the improvements of MIP on the Object Sorting tasks appear to be within the error bars.
- A key ablation is missing. One of the main claims in this work is that the proposed sparsification technique, which focuses on removing nonlinearities rather than neurons, improves performance. Only a partial ablation in support of this claim is provided in the appendix A2. Specifically, A2 shows that the proposed approach leads to lower prediction error. However, to quote L277-279 in the main paper, “what we really care about is the performance when executing optimized plans in the original simulator or the real world. Therefore, it is crucial to evaluate the effectiveness of these models within a closed-loop control framework.” I agree. Thus, it is similarly crucial to perform this ablation in a closed-loop setting.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: I am willing to adjust my rating if these questions can be answered:
- Please comment further on the results in Figure 5; there is a trend but why is this a significant advance?
- Does the proposed sparsification technique improve closed-loop performance?
- What are the limitations of the work?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: No, the paper does not discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper, and for your insightful feedback that helped improve our work.
> Please comment further on the results in Figure 5; there is a trend but why is this a significant advance?
In both Object Sorting tasks, our method using MIP and sparsified models of 24 and 36 ReLUs demonstrates superior performance compared to the method using MPPI and a full model with 512 ReLUs, as indicated by the lower interquartile range and median. This supports our statement that having a sparsified model allows us to benefit from more powerful optimization tools, leading to superior closed-loop control results despite having a less accurate dynamics model.
In Object Pushing, MIP with a model of 60 ReLUs achieved comparable performance to MPPI with a model of 768 ReLUs, and slightly better performance compared to MPPI with 200 ReLUs.
We agree that MIP achieves a similar performance to MPPI on Rope Manipulation. There is a more significant gap in model prediction error between the full dynamics model and highly sparsified models due to the complex physical properties of the deformable rope (Fig. 3a). The task requires accurately manipulating the rope to match a target shape specified with key points along the rope, so the performance is highly dependent on having an accurate dynamics model. Even though using a sparsified model enabled us to leverage a more powerful optimization tool, giving better results in open-loop optimization (Fig. 3b), the model prediction error limits the performance upper bound in closed-loop planning. Future improvements could involve the co-optimization of the model sparsification procedure with control synthesis. Enabling the sparsification process to be mindful of the subsequent control task may lead to even better performance.
> Does the proposed sparsification technique improve closed-loop performance?
The reviewer pointed out that it is crucial to perform an additional ablation study of our proposed sparsification technique in a closed-loop setting, as we also stated in the paper. To that end, we performed additional experiments to validate the closed-loop control performance of our proposed method, and provide the results below (also in Rebuttal PDF Fig. 3 right).
| Num. ReLUs | 36 | 24 | 18 | 15 |
|:----------:|:--------:|:--------:|:--------:|:--------:|
| Ours | 0.172873 | 0.192272 | 0.241469 | 0.271934 |
| Li et al. [1] | 0.186144 | 0.239412 | 0.321603 | 0.287213 |
The numbers in each column correspond to the closed-loop control performance, measured by RMSE, using the sparsified model with the corresponding number of ReLUs.
These ablation results, combined with results reported in supplemental material Fig. 2, demonstrate that our proposed sparsification scheme brings significant improvements in both open-loop prediction and closed-loop control performance.
> What are the limitations of the work?
Please refer to the global rebuttal response for discussions of limitations of our work.
**Have these responses addressed the reviewer’s concerns? We look forward to continuing the discussion.**
[1] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, ‘Pruning Filters for Efficient ConvNets’, CoRR, vol. abs/1608.08710, 2016.
---
Rebuttal 2:
Title: Are there any additional questions or concerns?
Comment: We hope that our additional experiments following the reviewer’s suggestions, discussion of limitations, and responses addressed the questions and concerns raised. **We would be happy to continue the discussion if the reviewer has additional questions or concerns.**
---
Rebuttal Comment 2.1:
Title: Questions resolved
Comment: Thank you for the detailed response. The closed-loop performance is encouraging and I believe it will strengthen the paper. Additionally, I appreciate the discussion of limitations in the main response. I do not have any remaining questions and I have updated my score. | Summary: This paper proposes a framework for model-based planning with forward dynamics represented as sparse neural networks. The paper examines different ways of inducing sparsity in MLP and GNN based forward models, and performs real robot manipulation experiments investigating the tradeoffs with sparsity and performance.
Strengths: - the paper targets an interesting problem of learning efficient world models for control by reducing optimization costs in learning the forward model. The specific approach of investigating how sparsity can enable efficient learning of the forward model for real robot manipulation scenarios is novel in my understanding.
- the paper is easy to follow, with detailed descriptions of the different ways of inducing sparsity and the different architectures. The research questions investigated are well formulated and adequately addressed in the experiments.
- the experiments are on interesting real robot manipulation tasks like object sorting, and rope manipulation, so the findings are amenable to be deployed in real world control scenarios.
Weaknesses: - the main weakness of the paper is that there aren't comparisons to prior model-based RL approaches, e.g. PETS, Dreamer, MBPO etc (https://arxiv.org/abs/1805.12114,https://arxiv.org/abs/1912.01603, https://arxiv.org/abs/1906.08253) . If it is difficult to compare them on hardware experiments, there should be comparisons done on simulated robot control tasks.
- it is unclear how general the findings are in terms of being applied to different model-based control frameworks. There should be a discussion (and possibly experiments) about whether we can take any prior model-based RL algorithm, replace the forward model with a sparse neural network, and see benefits under certain assumptions - if not, then a discussion is needed of what exactly the requirements are on how the model should be learned and how a policy (or plan) should be learned.
- the experimental results, in my understanding, are on low-dimensional states instead of images. Is there a reason for not evaluating with image observations? Encoding image observations into latent states and then doing policy learning is a common practice in the community now - so it will be helpful to have a discussion regarding this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to my list of weaknesses above. In summary
- comparisons to prior MBRL approaches?
- discussion about generality of the findings?
- feasibility of experiments with image observations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are a bit unclear at the moment (also refer to my list of weaknesses). It will be helpful to provide a detailed discussion of the limitations and generality of the findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper, and for your thoughtful comments and suggestions that have contributed to the refinement of our work.
> there aren't comparisons to prior model-based RL approaches, e.g. PETS, Dreamer, MBPO etc
We conducted additional experiments employing two model-based RL methods, 1) our learned full neural dynamics model with PPO to learn a control policy, and 2) MBPO [4] learning a model and a policy from scratch, on two OpenAI Gym [3] benchmark environments, Cartpole-v1 and Reacher-v4. Below are the results (also included in Rebuttal PDF Fig. 1)
| | Ours | MPPI | Model-free PPO | PPO + Our model | MBPO [4] |
|-------------|:-----------------------:|:---------------:|:--------------:|:---------------:|:---------------:|
| Cartpole-v1 | **0.003728** (4 ReLUs) | 0.003735 (full) | 0.039222 | 0.006291 (full) | 0.004268 (full) |
| Reacher-v4 | **0.053467** (48 ReLUs) | 0.064792 (full) | 0.237225 | 0.108016 (full) | 0.110031 (full) |
Each number represents the median of the RMSE over 100 trials. The parentheses after each number denote the number of ReLUs used in the neural dynamics model.
The model-based RL methods require additional time to train a policy using the learned dynamics model, whereas our approach directly optimizes a task objective over the dynamics model without needing additional training.
Overall, our experiment results demonstrate that our method using sparsified neural dynamics models with fewer ReLUs can be applied to a wide variety of tasks and exhibit better performance compared to prior methods examined.
> it is unclear how general are the findings in terms of being applied to different model-based control frameworks
The neural dynamics model learned in our method is generic and not limited to only working with our planning framework. We took the learned full and sparsified dynamics models and trained a control policy with PPO interacting only with the learned model as suggested, and provide the experiment results below (also included in Rebuttal PDF Fig. 2).
| Num. ReLUs | 48 | 16 | 8 | 4 | 2 | 1 | 0 |
|:----------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| RMSE | 0.006291 | 0.007201 | 0.009670 | 0.018122 | 0.088279 | 0.088655 | 0.089807 |
The results above on Cartpole-v1 show that the neural dynamics models trained in our method can generalize and combine with another model-based control framework. As the model becomes progressively sparsified, the closed-loop control performance gracefully degrades.
> Is there a reason for not evaluating with image observations?
Our method is agnostic to model architectures and can be applied to incorporate image observations encoded as latent states, as the reviewer suggested, as long as the encoder is a ReLU network. For the specific real-world robotic manipulation tasks considered in this paper, prior works [1, 2] showed that low-dimensional structured representations provide stronger generalization capabilities than pixel-based dynamics, especially for compositional systems containing varying numbers of objects (as is the case in our Object Sorting tasks). That is why we chose not to evaluate with image observations in our experiments.
> It will be helpful to provide a detailed discussion of the limitations and generality of the findings.
Please refer to the global rebuttal response for discussions of limitations and generality of our work.
**Have these responses addressed the reviewer’s concerns? We look forward to continuing the discussion.**
[1] N. Watters, A. Tacchetti, T. Weber, R. Pascanu, P. W. Battaglia, and D. Zoran, ‘Visual Interaction Networks’, CoRR, vol. abs/1706.01433, 2017.
[2] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M. Toussaint, ‘Learning Multi-Object Dynamics with Compositional Neural Radiance Fields’, in Conference on Robot Learning, 2022.
[3] G. Brockman et al., ‘OpenAI Gym’, CoRR, vol. abs/1606.01540, 2016.
[4] M. Janner, J. Fu, M. Zhang, and S. Levine, ‘When to Trust Your Model: Model-Based Policy Optimization’, CoRR, vol. abs/1906.08253, 2019.
---
Rebuttal Comment 1.1:
Title: response to rebuttal
Comment: Dear authors,
Thanks for the response and the additional experiments. The comparison to model-based baselines provided is helpful. I am still not convinced by the applicability to image based observations, which is important in order to be able to tackle more realistic tasks with less assumptions on the observed states. However, I am still leaning towards accept for the paper, as my other two concerns have been partially resolved.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer H7kE,
Thank you again for dedicating your time and effort to providing a thorough review of our paper. We deeply appreciate your constructive feedback and thoughtful comments that have helped us improve our work. | Summary: This paper focuses on the combination of predictive control and model learning. An autoregressive dynamics model based on a ReLU neural network is first learned over the observation space. The authors then aim to sparsify it after introducing the indicator mapping function. To make the optimization feasible, the Gumbel-Softmax trick is applied to replace the greedy operation. Once the sparse dynamics model is learned, mixed-integer programming solvers are used to obtain the control policy. Finally, the authors validate the proposed method across a few tasks and show its promising performance.
Strengths: 1. The proposed framework of sparse neural dynamics in predictive control looks interesting. The use of Gumbel-Softmax to reparameterize the original discrete optimization makes gradient-based methods feasible, and is indeed a reasonable idea.
2. The optimization for the control part is more efficient compared with gradient-based methods, which constitutes another contribution of this paper.
Weaknesses: Regarding the experimental comparison, it would be more convincing if the authors tested on commonly used reinforcement learning benchmarks. The only comparison regarding the control policy is in Figure 3(b), and it only involves different optimization solvers. The current results leave the impression that the proposed method may overfit to these tasks only.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What's the complexity of applying MIP?
2. What's the underlying architecture of the ReLU neural network? It would be a bit surprising if a feedforward neural network could capture features from image inputs.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing our paper, and for your insightful suggestions that have improved our work.
> It would be more convincing if the authors can test on commonly used reinforcement learning benchmarks
We conducted further experiments on two additional environments, Reacher-v4 and Cartpole-v1, from OpenAI Gym [3], and reported the results below (also in Rebuttal PDF Fig. 1)
| | Ours | MPPI | Model-free PPO | PPO + Our model | MBPO [4] |
|-------------|:-----------------------:|:---------------:|:--------------:|:---------------:|:---------------:|
| Cartpole-v1 | **0.003728** (4 ReLUs) | 0.003735 (full) | 0.039222 | 0.006291 (full) | 0.004268 (full) |
| Reacher-v4 | **0.053467** (48 ReLUs) | 0.064792 (full) | 0.237225 | 0.108016 (full) | 0.110031(full) |
Each number represents the median RMSE over 100 trials. Parentheses indicate whether the method used the full model or a sparsified model with the specified number of ReLUs.
As the table illustrates, our approach outperforms prior methods on the two standard RL benchmark environments. Notably, our approach achieved superior performance using highly sparsified neural dynamics models with far fewer ReLUs, compared to prior works.
> The current results would leave the impression that the proposed method may overfit on these tasks only.
Our experiments covered a variety of tasks involving rigid and deformable objects, in scenarios ranging from single to multiple objects, applied to dynamics models instantiated as feed-forward neural networks and graph neural networks. These experiments demonstrated the applicability and combinatorial generalizability of our method. We hope that the results on the two additional tasks further substantiate that our method is applicable to a wide range of task settings.
> What's the complexity of applying MIP?
Solving MIPs is NP-hard, so in the worst case, the solve time might be exponential in the number of ReLUs. However, there exist highly optimized solvers like Gurobi that can solve medium-size MIPs very quickly in practice. The solution of a MIP can also be terminated early, and the branch and bound solver will return a solution together with an upper bound on the distance of this solution from the global minimum. This strategy can be leveraged to further reduce the runtimes, while still providing strong optimality guarantees. We provide details on various strategies in the literature to accelerate solving MIPs in Section 3.5.1 of the paper.
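To make this concrete: in the standard (textbook) big-M encoding that MIP solvers exploit, each ReLU contributes one binary activation variable, which is why the worst case is exponential in the number of ReLUs. A minimal feasibility check of that encoding is sketched below (a generic formulation, not code from the paper; `M` is assumed to upper-bound |x|):

```python
def relu_big_m_feasible(x, y, z, M=100.0):
    """Check the mixed-integer constraints encoding y = max(0, x).

    z is the binary activation indicator (1 if the unit is active).
    The four linear constraints, with z in {0, 1}, are satisfied
    exactly when y = max(0, x), assuming |x| <= M.
    """
    eps = 1e-9  # numerical slack for the comparisons
    return (z in (0, 1)
            and y >= x - eps
            and y >= -eps
            and y <= x + M * (1 - z) + eps
            and y <= M * z + eps)

# y = relu(x) is feasible with the matching indicator:
assert relu_big_m_feasible(x=3.0, y=3.0, z=1)
assert relu_big_m_feasible(x=-2.0, y=0.0, z=0)
# An incorrect output is infeasible for either indicator value:
assert not relu_big_m_feasible(x=3.0, y=0.0, z=0)
assert not relu_big_m_feasible(x=3.0, y=0.0, z=1)
```

A network with n remaining ReLUs thus yields n binaries and up to 2^n branch-and-bound nodes in the worst case, which is the motivation for sparsifying the model before handing it to a solver.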
> What's the underlying architecture of the ReLU neural network? It would be a bit surprising if a feedforward neural network can capture the features from image inputs.
Below are the architecture details (also provided in supplemental material, section B). The ordered lists represent the number of ReLUs in each layer of the network.
- PWA functions: 96, 192, 192, 96
- Object pushing: 256, 256, 256
- Object sorting: graph neural network similar to Sanchez-Gonzales et al. [5] with 64 hidden units per layer, with a total of 512 ReLUs
- Rope manipulation: 256, 256, 256
- Cartpole-v1: 16, 16, 16
- Reacher-v4: 64, 64, 64
We used either keypoint (object pushing, rope manipulation), object-centric representation (object sorting), or simulation state (cartpole, reacher) as our state representations. Prior works [1, 2] demonstrated that low-dimensional structured representations have superior generalization capabilities compared to pixel-based dynamics, especially for compositional systems containing varying numbers of objects (as is the case in our Object Sorting tasks).
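As an illustration of the scale involved, a feed-forward dynamics model of the cartpole size listed above can be sketched in pure Python (random placeholder weights and an assumed 4-dim state plus 1-dim action; the real model is learned from data):

```python
import random

random.seed(0)

def make_layer(n_in, n_out):
    """Random placeholder weights; the actual model is trained."""
    weights = [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
               for _ in range(n_out)]
    return weights, [0.0] * n_out

def forward(x, layers):
    """Autoregressive dynamics: map (state, action) to next state."""
    for i, (w, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        if i < len(layers) - 1:          # ReLU on hidden layers only
            x = [max(0.0, v) for v in x]
    return x

# Cartpole-style sizes: 5 inputs -> three hidden layers of 16 ReLUs
# each (as listed in the rebuttal) -> 4-dim next state.
sizes = [5, 16, 16, 16, 4]
layers = [make_layer(a, b) for a, b in zip(sizes, sizes[1:])]
next_state = forward([0.1, 0.0, 0.05, 0.0, 1.0], layers)
```

With 48 hidden units total, such a network stays small enough for the MIP encoding discussed earlier, which is the design point of the sparsification.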
**Have these responses addressed the reviewer’s concerns? We look forward to continuing the discussion.**
[1] N. Watters, A. Tacchetti, T. Weber, R. Pascanu, P. W. Battaglia, and D. Zoran, ‘Visual Interaction Networks’, CoRR, vol. abs/1706.01433, 2017.
[2] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M. Toussaint, ‘Learning Multi-Object Dynamics with Compositional Neural Radiance Fields’, in Conference on Robot Learning, 2022.
[3] G. Brockman et al., ‘OpenAI Gym’, CoRR, vol. abs/1606.01540, 2016.
[4] M. Janner, J. Fu, M. Zhang, and S. Levine, ‘When to Trust Your Model: Model-Based Policy Optimization’, CoRR, vol. abs/1906.08253, 2019.
[5] A. Sanchez-Gonzalez et al., ‘Graph networks as learnable physics engines for inference and control’, CoRR, vol. abs/1806.01242, 2018.
---
Rebuttal Comment 1.1:
Title: Are there any additional questions or concerns?
Comment: We hope that our additional experiments following the reviewer’s suggestions, discussion of limitations, and responses addressed the questions and concerns raised. **We would be happy to continue the discussion if the reviewer has additional questions or concerns.**
---
Rebuttal Comment 1.2:
Comment: Thanks to the authors for addressing my concern and questions. Regarding the additional experiments, could you also add (i) the accumulated rewards (ii) standard deviation, as in previous papers? After the authors' response, the strength of this submission is more clear to me. On the other hand, I also see the concern from the other reviewers on the experimental significance and extension to other complex domain, which seems a bit challenging given the current status.
---
Reply to Comment 1.2.1:
Comment: Thank you for the follow-up questions! We are glad to hear that our response addressed your concerns and questions. We respond to the additional questions below.
> could you also add (i) the accumulated rewards (ii) standard deviation, as in previous papers?
In Figure 1 of the Rebuttal PDF, we included the performance comparison of our approach against prior methods measured by RMSE, which in our setting is the additive inverse of the episode return averaged over steps. We included the interquartile range, minimum, and maximum as part of the box plot. These results further demonstrate that our approach achieved superior closed-loop control performance with sparsified dynamics models compared to prior works using full dynamics models.
> I also see the concern from the other reviewers on the experimental significance and extension to other complex domain…
We hope to highlight that **our approach achieved superior closed-loop control performance on the Object Sorting, Object Pushing, Cartpole, and Reacher tasks using highly sparsified neural dynamics models with fewer ReLUs, compared to model-free RL and prior methods using full dynamics models.**
Only on the Rope Manipulation task does MIP (ours) achieve performance similar to MPPI. The larger gap in model prediction error between the full dynamics model and highly sparsified models limits the performance upper bound in closed-loop control, even though our approach using sparsified models enabled us to leverage a more powerful optimization tool, giving better results in open-loop optimization. Future improvements could involve co-optimizing the model sparsification procedure with control synthesis; making the sparsification process aware of the subsequent control task may lead to even better performance.
Regarding the potential of extending our method to other complex domains, we would like to emphasize that **our method is generic and applicable to a wide variety of dynamics models instantiated using ReLU networks, including but not limited to feed-forward neural networks and graph-neural networks with compositional architectures.** Our experiments also showed **competitive performance and combinatorial generalizability on a wide range of tasks involving rigid and deformable objects in scenarios ranging from single to multiple objects.** The experiments on the two OpenAI Gym environments further demonstrated the generalizability of our method.
**We wish to again express our heartfelt thanks for your helpful feedback and suggestions that helped us improve our paper. We would be happy to continue the discussion if the reviewer has additional questions or concerns.** | Rebuttal 1:
Rebuttal: We thank the reviewers for dedicating their time and effort in reviewing our paper, and we deeply appreciate their thoughtful comments and insightful feedback. We appreciate the reviewers agreeing that our approach is novel, well formulated, tackles an interesting problem, and that our paper is well written.
> **Additional Experiments**
We have conducted additional experiments as requested by the reviewers and have included the results in the Rebuttal PDF, including:
1. Experiments involving two environments from OpenAI Gym (Cartpole-v1, Reacher-v4), to further demonstrate the generalizability of our method to a variety of tasks. (Reviewer h4y2)
2. Comparisons to an additional prior model-based RL method (MBPO), to showcase the superior performance of our method using a sparsified dynamics model compared to prior methods using a full dynamics model. (Reviewers H7kE, LeuW)
3. Taking dynamics models trained and sparsified through our method to train a control policy with PPO, demonstrating that our sparsification scheme produces dynamics models that are general and can be effectively combined with prior model-based RL methods. (Reviewers H7kE, LeuW)
4. Closed-loop control performance of dynamics models sparsified using our approach compared to models sparsified using a prior method, showing that our sparsification approach generates dynamics models with lower prediction errors and better closed-loop control performance. (Reviewer fLUK)
5. Experiments examining the positive correlation between computation complexity and the number of remaining ReLUs in the dynamics model, to further support using remaining ReLUs as an effective proxy for computation complexity. (Reviewers h4y2, LeuW)
> **Applicability of the Proposed Method**
Following the comments from Reviewers h4y2, H7kE, and LeuW about the applicability of our method, we hope to highlight that our method is generic and applicable to a wide variety of dynamics models instantiated using ReLU networks, including but not limited to the feed-forward neural networks and graph neural networks examined in this paper. Our experiments also showed competitive performance and combinatorial generalizability on a wide range of tasks involving rigid and deformable objects in scenarios ranging from single to multiple objects.
> **Limitations**
In response to reviewers’ requests, we provide more details on the limitations of our approach below. We will include these discussions in the revised version of our paper.
Our method relies on sparsifying neural dynamics models to fewer ReLU units to make the control optimization process solvable in a reasonable time due to the worst-case exponential run time of MIP solvers. Although our experiments showed that this already enabled us to complete a wide variety of tasks, our approach may struggle when facing a much larger neural dynamics model. We provided a review of a variety of strategies in the literature to accelerate the process of finding a solution for our MIPs in section 3.5.1 of the paper.
Our experiments also demonstrated superior closed-loop control performance using sparsified dynamics models with only reasonably good prediction accuracy as a result of benefiting from stronger optimization tools, but our approach may suffer if the sparsified dynamics model becomes significantly worse and incapable of providing useful forward predictions.
In terms of generality, our method naturally applies to dynamics models represented by ReLU networks with diverse model architectures (MLP, GNN, etc.), and showed superior performance on a variety of tasks involving rigid and deformable objects in single to multi-object scenarios, demonstrating the applicability and combinatorial generalizability of our method.
**We look forward to discussing further with the reviewers during the discussion phase.**
Pdf: /pdf/7bc1cd25a80e95dac629e752cf84c82288ccc935.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Offline Primal-Dual Reinforcement Learning for Linear MDPs | Reject | Summary: This paper studied offline RL in the linear MDP setting, where the transition and reward have low-rank structures and the feature map $\phi$ is known. The authors formulated the problem in a primal-dual way and proposed a gradient-based algorithm. They provided convergence guarantees, which only require coverage over the optimal policy.
Strengths: The paper writing is clear and easy to follow.
The discussion and comparison with previous works is very detailed.
The algorithm is computationally efficient. The algorithm design has some interesting points, especially the reparameterization designed to avoid knowledge of $\Lambda^{-1}$ and the updates for the variables $v$ and $u$.
The coverage assumption seems weaker than in the previous literature.
Weaknesses: I didn't see too much technical novelty in the method and proof.
The setting is linear MDP, which is kind of restrictive.
Convergence rate is far from optimal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Given the knowledge of $\Lambda$, why not consider the reparameterization $\beta = \lambda$, i.e., set $c=0$, but consider $c=1/2$? It seems to me the RHS of Eq. in Line 141 can be estimated unbiasedly?
Line 154 claimed that $c = 1/2$ results in a tighter bound, however, according to Theorem 3.2, and Corollary 3.3, the larger $c$ is, the lower dependence on $d$ the sample complexity will have. So how the "tighter bound" is reflected? It seems contradictory.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback on our work. In response, we kindly highlight that the major technical novelty of our work is a reparametrization trick with which we adapt the conventional LP formulation to a novel framework for offline learning via primal-dual optimization. This allows us to derive low-variance gradient estimators that can be used in a stochastic optimization scheme using only samples from the behavior policy. Furthermore, our proof technique uses arguments from Neu et al. (2022), but tailored to this novel offline learning framework. Along the way, we also utilize a notion of coverage (coverage in a single direction in feature space) which, at least for $c=1/2$, is provably smaller than other standard coverage conditions. As we mention in our general response, these findings are novel among LP-based methods for offline RL, and are quite significant in our opinion.
While we strongly believe that our work presents novel ideas which are worth sharing, we agree that there is room for improvement.
Regarding our choice of $c$: yes, with $c=0$ we can still estimate the gradients unbiasedly provided we know $\Lambda$. However, the variance of the resulting gradient estimators is prohibitively large, which leads to a rather weak regret bound that scales with the inverse of the smallest eigenvalue of $\Lambda$. In other words, one has to assume a form of uniform coverage condition in the feature space, which is much stronger than the conditions that our estimators require. Though other choices may lead to interesting results, our guarantees are to be understood for $c \in \{1/2, 1\}$. That said, we leave the question of looking into other values for future work, and will mention this in the final version.
Finally, in line 154 of our work, by "tighter bound" we actually refer to the dependence on the coverage ratio. For example, in the tabular setting with $\Phi$ as the identity matrix, the coverage ratio for $c=1/2$ is $C_{\phi,1/2}(\pi^*;\pi_{B})=\sum_{x,a}\frac{\mu^{*}(x,a)^{2}}{\mu_{B}(x,a)}$, while for $c=1$ it becomes $C_{\phi,1}(\pi^*;\pi_{B})=\sum_{x,a}\frac{\mu^{*}(x,a)^{2}}{\mu_{B}(x,a)^{2}}$, acquiring an extra factor of $1/\mu_{B}(x,a)$ in each summand, which can be large. Thanks for noticing this; we will make our statement clearer in the final draft.
---
Rebuttal Comment 1.1:
Title: Acknowledged
Comment: Thanks for the response. I think my questions are addressed and I will take the response into consideration in the final decision period.
---
Reply to Comment 1.1.1:
Comment: Thank you, we are glad we could be of help and remain available in case you have more questions. | Summary: This paper proposed a primal-dual framework for offline reinforcement learning in linear MDPs. Contrary to the more common case of finite horizon, they considered the case of infinite horizon with discounted reward. They reduced the problem of offline reinforcement learning to the problem of solving the saddle point of a Lagrangian. They designed an algorithm which uses stochastic gradient-based optimization to find the saddle point. They provide a sample complexity of $O(\epsilon^{-4})$ for both the discounted MDP and average-reward MDP cases, and their algorithm is also computationally efficient.
To summarize, the formulation of offline RL into a linear programming problem is very interesting. The proof seems very solid, and I like the comparison for the concentrability constant in the last discussion section. The comparison for the constant C is thorough and very good.
However, I still have some questions about some details in the main text.
Strengths: 1. The formulation of offline reinforcement learning to a linear programming problem is very good.
2. The algorithm is clearly motivated by solving the saddle points of a Lagrange form. The algorithm itself is simple and computationally efficient, with a guaranteed sample complexity for both discounted MDP and averaged-reward MDP.
3. They proposed a new concentrability constant $C$ and compared it to other constants appearing in the literature on offline RL. I think the understanding of the relationship between these concentrability constants is basically correct and very clearly expressed.
4. The proof seems very solid and the result in averaged-reward case is new.
Weaknesses: 1. I have some question about your comparison to previous results. Your main references are Cheng et al and Xie et al.
1.1 For Xie et al, the Theorem 3.2 in https://proceedings.neurips.cc/paper_files/paper/2021/file/34f98c7c5d7063181da890ea8d25265a-Paper.pdf implies that their sample complexity is O(1/\eps^2) when applied in linear function approximation. This result is based on assumption3 in their paper. This assumption naturally holds in your paper since you consider linear MDP and they consider the case of 'linear function approximation' (for their difference, see point 2). So it is natural for you to compare your sample complexity to this result, not the O(1/\eps^5) one. [notice that, their algorithm in section 3 is computationally inefficient]
1.2 In Theorem 4.1 in Xie's paper, their sample complexity is O(1/\eps^5) when applied to general function approximation, and O(1/\eps^3) when reduced to the linear function approximation case (see the paragraph 'Dependence on T'). Again, their assumption for linear function approximation holds in your case. **This algorithm, however, is computationally efficient.** So you should also compare with this algorithm with O(1/\eps^3) sample complexity.
1.3 In Cheng's paper, in theorem 5, their sample complexity seems to be O(1/\eps^3), not O(1/\eps^5). I wonder how you derive their sample complexity in Table one.
1.4 I am not sure how you get the O(n^{7/5}) computational complexity for Xie's paper. Could you derive it in more detail?
2. I think in many places in your paper, you confuse the two terms: linear MDP and linear function approximation. Your case is called linear MDP instead of linear function approximation, so I suggest you changing the wrong terms. For reference, both Xie's paper and Cheng's paper consider the 'linear function approximation' case.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In Line 82 and 87, there are typos in the definition of rho and state occupancy measure? There should be a power of gamma in the sum?
2. In line 174-176, you calculate the computational complexity of your algorithm. Can you show this in more detail? I am not sure why each gradient update needs a constant number of elementary vector operation. What is your definition for elementary vector operation?
3. In section 3 and 4, you use c \in {1/2, 1} to state all your results. I wonder whether c can only take these two values? If so, then why cannot c take other values (for example, 2 or 3 or 1/4)? If not, then how will the algorithm behave and how will the sample complexity be when we take c to be other values? If c can take multiple values besides 1/2 and 1, how can you tune c to get the best performance?
4. In Theorem 3.2, the sample complexity depends on the constant $c$ through $D_\phi^{8c} \cdot d^{-2c}$, right? Does the constant hidden by the big O depend on $c$ or not? If not, then suppose $D_\phi^{8} / d^2 < 1$; if we take $c$ to be large or even let it go to infinity, the term inside the big O will approach zero. How can that happen? (Let's suppose $D_\beta$ is a constant and does not depend on $c$.) For example, in the most common setting, $c = 1$ and $d > 1$; then when we take $c \to \infty$, the term in the bracket will go to zero. If the term hidden by the big O depends on the constant $c$, then you should point it out in the paper.
5. In Line 198, you say D_\beta = c * C for some positive universal constant c. Is this c multiplied before the concentrabilty coefficient the same as that c in definition 3.1? If not, then you should use another notation.
6. In line 315, you claim $C_{\phi,1} = 1 + $ chi-square divergence. However, $C_{\phi,1}$ is $\sum (\mu^*)^2 / (\mu_B)^2$ and the chi-square divergence should involve $\sum (\mu^*)^2 / \mu_B$, right? So they should not be equal. For the definition of the chi-square divergence, see https://en.wikipedia.org/wiki/F-divergence.
7. In line 77, you claim the state space is finite but potentially very large; but in Line 338, you claim that 'not to assume the state space to be finite'. From the proof, I think the latter one is correct, and the first one is just for deriving some equations like equation 1 and 2. So I think you should write somewhere in section 3 that you relax this setting of finite state space and the derivation there also holds for infinite state space but finite action space.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the very careful reading of our paper.
Indeed, the works you reference provide many different algorithm variations and sample complexity bounds. Our comparison could definitely be improved a bit. We plan to fix this in the final draft of our paper, by adding to the appendix a more detailed explanation.
Most of the algorithms provided by Xie et al make use of computational oracles with an unspecified complexity, which makes it hard to categorize these methods as computationally efficient. In particular, in their Section 4 they present a "practical algorithm" and prove a bound of $O(\varepsilon^{-5})$ for it. However, this algorithm still makes use of an optimization oracle to solve an inherently nonconvex problem (which may be hard in general). They also claim that when reduced to the linear MDP setting the algorithm achieves $O(\varepsilon^{-3})$. We find this claim believable, but have to point out that no proof is offered in the paper. Working out the details may be possible, but we can see several nontrivial challenges. For instance, the concrete implementation of their algorithm in Appendix D does not match the specifications needed in the analysis (e.g., it uses an unbounded function class which prevents using a covering-number argument in the general analysis). In any case, we will point out their improved rate in the final version of our paper, but also mention these caveats.
Regarding Cheng et al, a similar argument can be made. They also provide an oracle-based algorithm with strong theoretical guarantees and a practical implementation for which they make (or prove) no explicit claims.
Thank you for pointing out these details, which made us dive deeper into these related works. We also plan to update our table accordingly.
Regarding the computational complexity for Xie et al (Appendix D). They need to compute the matrices B and C, which depend on the current policy, and therefore need to be recomputed at each iteration. Computing these matrices requires iterating over the whole dataset, and thus $O(n)$ time at each iteration. Finally, the total number of iterations is set to $T=O(n^{2/5})$, giving a total of $O(n^{7/5})$ run-time cost.
We thank you for the question. We plan to add a paragraph to the appendix about these calculations, to improve the clarity of our paper.
Regarding Linear MDP and linear function approximation. "Linear MDP" refers to properties of the problem, "function approximation" of the solution. They are often used interchangeably in the literature because a linear-MDP assumption allows for sample-efficient learning using linear functions to represent action-value functions. In the setting we consider, both expressions are appropriate, and there should not be any confusion: the linear-MDP assumption is always assumed to hold *and* our action-value functions are always linear.
## Questions
1. Yes, you are right. Thank you for noticing this!
2. Thank you for bringing this up, we will try to explain this more clearly in the final draft.
Our algorithm as presented in the paper makes use of elementary operations between vectors, such as sums, inner products, entry-wise, matrix-vector products etc. These operations may have a run-time cost proportional to the dimension of the vectors (which in turn is proportional to $d$), however they do not depend on $n$. The algorithm is made of one external loop which performs $T$ iterations, and one inner loop which each time does $K-1$ iterations. Each of the loops uses one sample per iteration and performs a constant number -- with respect to $n$ -- of elementary vector operations, resulting in $O(1)$ run-time iteration complexity with respect to $n$. Therefore, unrolling the inner loop, yields a cost of $O(K)$ per outer iteration. Multiplying that by the number of iterations of the outer loop we obtain a total cost of $O(T K)= O(n)$. To keep our answer simple we only considered the dependence to $n$, ignoring $d$, $|\mathcal{A}|$ and other smaller quantities.
3. The complete version of our results (shown in Theorem A.1 and C.1) are stated for a general c, and can give some insight on how the sample complexity changes for different values of c. However, we have not tried to fully optimize this. Rather, we choose to focus on intuitive choices of c. Interestingly, with $c=1/2$, we have shown that time and sample efficient offline learning is feasible in the discounted and average reward settings under a conceptually weaker notion of coverage (please see our general response). On the other hand, with $c=1$, we provided guarantees under similar conditions without knowledge of the feature covariance under the behaviour policy. We are more than convinced that these findings are relevant in their own right and leave further work such as tuning $c$ for future work.
4. Yes, the sample complexity in theorem 3.2 depends on $c$ by $D_\phi^{8c}d^{-2c}$ and also on $D_{\beta}$ since we require that $D_{\beta}\geq C_{\phi,c}(\pi*;\pi_{B})$. However, the simplified upper bound presented in the main text is **only valid for $c\in\{1/2,1\}$**. To get the full picture on how different values of $c$ impact the regret, we kindly refer the reviewer to the complete statement of the Theorem in Appendix A, that works for general $c$.
5. No, the $c$ in Line 198 is a numerical constant that has nothing to do with the parameter $c$ from Def. 3.1. Thank you for pointing this out.
6. Yes, that is a typo, thank you for finding it! It is $C_{\phi,1/2}$ that satisfies $C_{\phi,1/2} = 1 + \chi^2(\mu^* \| \mu_B)$, as you can check using the definition you provided.
7. Indeed, it is how you suggest. We consider finite (but potentially very large) state space for simplicity, but all the derivations extend to infinite state spaces. On the other hand, the finite action space is actually needed, for computational reasons, since we employ softmax policies and compute sums over actions in the algorithm. We will explain this better in the final version.
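The chi-square relationship discussed in point 6 above can be checked numerically in the tabular case: with the f-divergence definition $\chi^2(p\|q) = \sum_i q_i (p_i/q_i - 1)^2$, one gets $\sum_i p_i^2/q_i = 1 + \chi^2(p\|q)$. A toy check (the occupancy numbers below are purely illustrative):

```python
# Toy occupancy measures over four (state, action) pairs; illustrative only.
mu_star = [0.4, 0.3, 0.2, 0.1]       # occupancy of the optimal policy
mu_b = [0.25, 0.25, 0.25, 0.25]      # occupancy of the behavior policy

# Tabular coverage ratio for c = 1/2: sum of mu*^2 / mu_B.
coverage_half = sum(p * p / q for p, q in zip(mu_star, mu_b))

# Chi-square divergence from the f-divergence definition, f(t) = (t - 1)^2.
chi_square = sum(q * (p / q - 1.0) ** 2 for p, q in zip(mu_star, mu_b))

# The coverage ratio equals 1 + chi^2(mu* || mu_B).
assert abs(coverage_half - (1.0 + chi_square)) < 1e-9
```

The identity follows by expanding $\sum_i q_i (p_i/q_i - 1)^2 = \sum_i p_i^2/q_i - 2\sum_i p_i + \sum_i q_i = \sum_i p_i^2/q_i - 1$.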
---
Rebuttal Comment 1.1:
Title: Comparison to related work
Comment: Dear authors,
I see several reviewers point out concerns related to comparison to related works and also see your responses. We will have to gather everyone's opinion later after they have a chance of reading your rebuttal and each other's review, but just a very quick point regarding your comment:
> the concrete implementation of their algorithm in Appendix D does not match the specifications needed in the analysis (e.g., it uses an unbounded function class which prevents using a covering-number argument in the general analysis)
You are right the analysis is incomplete, but I think the computational efficiency is still quite obvious? The objective is just maximizing quadratic function with PSD Hessian (_edit:_ up to a minus sign) thus minimizing a convex function. You are totally right that they use an unbounded function class in the computational argument --- but adding (say) a norm constraint on the parameter is quite straightforward? Detailed computational complexity may be difficult to obtain, but broadly speaking I believe polynomial complexity will hold without issues. So at least it should be an entry in your table with very strong sample complexity whereas the detailed computational complexity is less clear (but polynomial).
---
Reply to Comment 1.1.1:
Comment:
Thank you for the comment. We find your observation totally reasonable. We will make it more clear in the text and the table, that PSPI, when reduced to the linear MDP setting, achieves $O(\varepsilon^{-3})$ and has polynomial computational complexity.
---
Rebuttal Comment 1.2:
Comment: Thanks to the authors for your reply!
Given your clarification on the related work, I will raise my score to six. | Summary: This paper studies offline reinforcement learning (RL) with linear function approximation and partial data coverage. The authors propose a primal-dual optimization method based on the linear programming (LP) formulation of RL. They prove a $O(\epsilon^{-4})$ sample complexity in both discounted setting and average-reward setting.
Strengths: 1. The algorithm proposed in this paper only requires near-minimal dataset coverage assumption, which is important in offline RL.
2. The paper also considers average-reward offline RL, which is often neglected by literature.
3. I like the table for comparison to previous work, which makes the presentation more clear (although I think there is some missing important literature, which I will mention in the weakness section).
4. The proposed algorithm is both computationally and sample efficient.
Weaknesses: 1. The first concern is the ‘linear function approximation’ setting, which is restricted. Actually, the main motivation for this paper to study function approximation beyond the tabular setting is the large state (or action) spaces arising in practice. However, in real settings, the linear function approximation assumption hardly ever holds. Even in Table 1, many algorithms from previous work apply to general function approximation, which further makes the setting studied in this paper restricted.
2. Algorithm 1 in this paper achieves a $O(\epsilon^{-4})$ sample complexity. This is in terms of expectation (as shown in Theorem 3.2) instead of high probability. The previous results that the authors are comparing to are high-probability bounds (e.g., [1,2]), so it would be more comparable if the authors could also show a $O(\epsilon^{-4})$ sample complexity bound under high probability. Also, since the previous work studies general function approximation while this paper studies only linear function approximation, it is hard to say that a $O(\epsilon^{-4})$ sample complexity bound in the linear function approximation setting is better than a $O(\epsilon^{-5})$ bound in the general function approximation setting. Moreover, [3] achieves the near-optimal sample complexity $O(\epsilon^{-2})$ in near-identical settings to [1,2]. (So I disagree with the statement that ‘It is very important to notice that no practical algorithm for this setting so far, including ours, can match the minimax optimal sample complexity rate of $O(\epsilon^{-2})$’.) Therefore, a $O(\epsilon^{-4})$ in linear function approximation is not that attractive compared to previous work.
3. The authors use an LP formulation of offline RL. I think it would be better to compare to other work using the LP formulation, e.g. [4,5], where [4] is computationally and sample efficient under partial data coverage and general function approximation, and [5] achieves near-optimal sample complexity under similar settings.
4. The authors compare the computational complexity. However, it is not that direct to compare an $O(n)$ complexity in linear settings to a $O(n^{7/5})$ complexity in general settings. If the authors really want to demonstrate that their algorithm has better computational complexity, it would be better to do some simulations in the same environment (even in some toy examples).
5. Another advantage that the authors claim is that their algorithm could be adapted to average-reward setting. However, neither did the authors emphasize and explain the importance and challenges of average-reward settings, nor discuss why (or whether) previous work could not be adapted to average-reward settings. I suggest the authors discuss this a bit more.
**References**
[1] Xie, T., Cheng, C. A., Jiang, N., Mineiro, P., & Agarwal, A. (2021). Bellman-consistent pessimism for offline reinforcement learning. Advances in neural information processing systems, 34, 6683-6694.
[2] Cheng, C. A., Xie, T., Jiang, N., & Agarwal, A. (2022, June). Adversarially trained actor critic for offline reinforcement learning. In International Conference on Machine Learning (pp. 3852-3878). PMLR.
[3] Zhu, H., Rashidinejad, P., & Jiao, J. (2023). Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning. arXiv preprint arXiv:2301.12714.
[4] Zhan, W., Huang, B., Huang, A., Jiang, N., & Lee, J. (2022, June). Offline reinforcement learning with realizability and single-policy concentrability. In Conference on Learning Theory (pp. 2730-2775). PMLR.
[5] Rashidinejad, P., Zhu, H., Yang, K., Russell, S., & Jiao, J. (2022). Optimal conservative offline rl with general function approximation via augmented lagrangian. arXiv preprint arXiv:2211.00716.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Would it be easy to convert the result from the expectation form to the high-probability version?
2. Is it easy to modify previous algorithms for the average-reward settings? If not, what is the difficulty?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and for acknowledging many of the strengths of our work.
We agree. Two of the main weaknesses of our work are that it is limited to the linear MDP setting, while many related works consider a more general function approximation setting; and the absence of bounds in high probability. Expanding our approach to work under more general function approximation and proving bounds in high probability are definitely interesting directions for future work.
> Also, since the previous work studies general function approximation while this paper studies only linear function approximation, it is hard to say that a $O(\varepsilon^{-4})$ sample complexity bound in linear function approximation setting is better than a $O(\varepsilon^{-5})$ bound in general function approximation setting.
We now realize that our wording may have been misleading --- apologies for that! Our intention was not to claim that our method is better in every way than previous works. In fact we believe that comparing rates and constants along a variety of settings is an overly reductive view of progress in the area, and that all contributions should be evaluated on grounds of their intellectual value. We elaborated more about this point in our general comment.
We also thank the reviewer for referencing the papers of [Zhan et al 2022][1], [Rashidinejad et al 2022][2] and [Zhu et al 2023][3]. This line of works is very interesting, and indeed very much related to our paper. We will add a paragraph to the final draft with a detailed comparison, as well as update our table accordingly (see the attached pdf).
[1]: https://proceedings.mlr.press/v178/zhan22a.html
[2]: https://arxiv.org/abs/2211.00716
[3]: https://arxiv.org/abs/2301.12714
> 4. The authors compare the computational complexity. However, it is not that direct to compare an $O(n)$ complexity in linear settings to a $O(n^{7/5})$ complexity in general settings. If the authors really want to demonstrate that their algorithm has better computational complexity, it would be better to do some simulations in the same environment (even in some toy examples).
We again apologize if our wording was not very clear. As stated before, our intention was not to claim the general superiority of our method. However, regarding a comparison with respect to computational efficiency, please note that most of the works we reference are indeed oracle-efficient, but the computational complexity of the oracles is left unspecified. This may require exhaustive search over large discrete function classes in the worst case. In contrast, our methods come with tight, concrete computational complexity guarantees on the number of elementary operations being used.
> 5. Another advantage that the authors claim is that their algorithm could be adapted to average-reward setting. However, neither did the authors emphasize and explain the importance and challenges of average-reward settings, nor discuss why (or whether) previous work could not be adapted to average-reward settings. I suggest the authors discuss this a bit more.
We agree with your assessment here. We will update our draft accordingly. Please refer to our answer to reviewer C22H for more details about this topic.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response as well as the attached table. However, I still have some concerns regarding the contribution of this work.
For the attached table, the sample complexity of ALMIS should be $O(1/\epsilon^2)$ instead of $O(1/\epsilon^3)$. Is this a typo or am I missing something? Also, the authors added a column denoting whether the algorithm is oracle-based. However, all algorithms dealing with general function approximation need an optimization oracle (otherwise, all they can do is enumerate all functions in the function class since it is a general function class without any additional assumptions). Therefore, it is unfair (or unnecessary) to discuss whether it is oracle-based when comparing linear function approximation to general function approximation.
I agree with the authors that sample complexity should not be the only dimension along which to compare different algorithms. However, according to the table, it seems the only dimension among multiple ones along which the proposed algorithm beats the others is whether it can be extended to the average-reward case. However, considering restrictions in other dimensions (linear MDP, suboptimal sample complexity, no empirical evaluation), it remains unclear whether the current algorithm contributes enough to the community.
The authors also mentioned that their bound reflects the correct comparator-dependent quantity, which I agree with. From my understanding, this quantity corresponds to the partial data coverage assumption, which only requires that the offline dataset covers the optimal policy (or the target policy). However, many recent works also assume partial data coverage, so they all use the correct notion. Given that, the coverage notion does not seem to be a significant advantage over previous works.
Overall, I appreciate the point of average reward setting and the new method to formulate linear MDP. However, it still has many restrictions. I believe it would be great work if (some of) these restrictions could be addressed. For the current state of the paper, I slightly lean towards rejection and would keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for the feedback, and for raising your evaluation of our work. Please see further clarification on your comments below.
Yes, the sample complexity of ALMIS is $O(\epsilon^{-2})$ not $O(\epsilon^{-3})$. Thank you for bringing this to our attention. We will update our table accordingly.
Regarding your comment on our table, you do make a fair point. However, our focus in the paper is on linear function approximation, a special case of general function approximation. For this reason, we believe that excluding algorithms that were developed for general function approximation would be unfair. That said, we would like to emphasize that the fact that oracle access is common in works focused on general function approximation is of little interest for our comparison. Rather, what we care about is whether these methods are efficient once they are applied to the linear case. When this is not the case, or it is not clear from the papers, such methods were categorized as "oracle-based". This does not imply that these methods cannot be made efficient with some work, but that's beyond our scope.
We are happy that you agree that sample complexity should not be the only dimension along which to compare different algorithms. We also appreciate your acknowledgement of our achievement in offline learning in the average-reward setting. However, we think our contribution goes beyond tackling the average-reward setting. To this end, and with regard to our concrete contributions to the RL community, please refer to our response to your previous comments as well as paragraph 4 of our general response.
Once again, thank you for acknowledging our efforts and reconsidering your evaluation of our work. We also appreciate you drawing our attention to the limitations, because, like with every research area, these will make a great foundation for future work. | Summary: This paper studies offline reinforcement learning with linear function approximation. They propose a primal-dual algorithm, formulating linear RL into a minimax problem and solving it with gradient descent-ascent. Sample complexity analysis is provided for infinite-horizon discounted and average-reward MDPs, where the rate is $O(\frac{1}{\epsilon^4})$ for both settings.
Strengths: 1. The algorithm is primal-dual and thus easy to implement in practice.
2. The paper provides rigorous theoretical analysis.
Weaknesses: 1. The newly defined coverage ratio $C_{\phi,c}$ is a little strange when $c\neq \frac{1}{2}$. For example, when we choose $c=1$, and thus we don't need the knowledge of $\Lambda$, the coverage ratio is $C_{\phi,1}=\sum_{x,a}(\frac{\mu^*(x,a)}{\mu_B(x,a)})^2$. Then when $\mu^*=\mu_B$, $C_{\phi,1}$ will become $|X||A|$. However, in the literature, when the behavior policy is the same as the optimal policy, the coverage is typically 1. The authors claim that we can estimate $\Lambda$ via the offline dataset so that we can choose $c=\frac{1}{2}$, but do not provide any theoretical analysis of this point. I would be more convinced if the authors could give more rigorous proofs for this method.
2. The sample complexity is worse than the typical rate $\frac{1}{\epsilon^2}$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> 1. The newly defined coverage ratio $C_{\phi,c}$ is a little strange when $c\neq \frac{1}{2}$. For example, when we choose $c=1$, and thus we don't need the knowledge of $\Lambda$, the coverage ratio is $C_{\phi,1}=\sum_{x,a} (\frac{\mu^*(x,a)}{\mu_B(x,a)})^2$. Then when $\mu^*=\mu_B$, $C_{\phi,1}$ will become $|X||A|$. However, in the literature, when the behavior policy is the same as the optimal policy, the coverage is typically 1. The authors claim that we can estimate $\Lambda$ via the offline dataset so that we can choose $c=\frac{1}{2}$, but do not provide any theoretical analysis of this point. I would be more convinced if the authors could give more rigorous proofs for this method.
Reducing the linear function approximation case to the tabular case can be useful for gaining some intuitive understanding; however, one must be very careful in drawing conclusions from it. The fundamental motivation behind the linear function approximation setting is that the state space can be too big to handle. For this reason we replace every dependence on $|\mathcal{X}|$ with a dependence on the feature space dimension $d$, which is instead assumed to be small enough to be manageable. Reducing this setting to the tabular case implies taking $d=|\mathcal{X}||\mathcal{A}|$, which violates said assumption, unless we concede that $|\mathcal{X}|$ is now manageable as well.
Having said this, we acknowledge that our coverage ratio is proportional to $|\mathcal{X}||\mathcal{A}|$ in this case. However, this does not necessarily imply that our bounds are worse, because the coverage ratio should not be considered in isolation but in the context of the bounds it is used in; and in the setting we are now considering, a dependence on $|\mathcal{X}||\mathcal{A}|$ is definitely acceptable. Moreover, even if this resulted in a worse-than-optimal dependence on $|\mathcal{X}||\mathcal{A}|$, please keep in mind we are considering a degenerate case, and that we had to make design decisions to tackle problems which are not present in the tabular case. To be more specific, in the tabular case one could very easily work with state-action occupancy measures and get rid of the double loop of our algorithm, immediately halving the sample complexity.
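For concreteness, this degenerate tabular case can be checked in a few lines of Python. The sketch below assumes our reading of the tabular reduction, namely $\Lambda=\mathrm{diag}(\mu_B)$, $\lambda^*=\mu^*$ and $C_{\phi,1}=\|\Lambda^{-1}\lambda^*\|^2$; the variable names are illustrative, not from the paper.

```python
import numpy as np

# Tabular reduction sketch: Lambda = diag(mu_B), lambda* = mu*, and
# (under our reading) C_{phi,1} = ||Lambda^{-1} lambda*||^2, which
# reduces to sum_{x,a} (mu*(x,a) / mu_B(x,a))^2.
n = 12                                  # plays the role of |X||A|
rng = np.random.default_rng(0)
mu_B = rng.dirichlet(np.ones(n))        # behavior occupancy measure
mu_star = mu_B.copy()                   # degenerate case: mu* = mu_B
Lam = np.diag(mu_B)
C_phi_1 = np.linalg.norm(np.linalg.solve(Lam, mu_star)) ** 2
# Every summand (mu*/mu_B)^2 equals 1, so C_{phi,1} = |X||A| = n.
assert np.isclose(C_phi_1, n)
```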
On the subject of estimating $\Lambda$, we find it plausible that one can replace the exact feature covariance matrix with an estimate that is built either offline using some fraction of the overall sample budget, or online using all past data. In particular, in the tabular case, we could simply use all data to estimate the visitation probabilities of each state-action pair and use them to build an estimator of $\Lambda$. Details of a similar approach have been worked out by Gabbianelli et al. (ALT 2023) for the bandit case. We believe that the same approach should work for more general linear function approximation as well, but that working out the details of such an approach would make our already rather complicated paper even more complex. An idea would be to directly estimate $\Lambda^{-1}$ using the matrix geometric resampling method proposed by Neu and Olkhovskaya (COLT 2020). Finally, we would like to remark that one of the appeals of our method is precisely that you can avoid this step and never estimate $\Lambda$, at the price of inflating the coverage ratio.
> 2. The sample complexity is worse than the typical rate $\frac{1}{\epsilon^2}$.
While it is true that the optimal sample complexity is $O(\varepsilon^{-2})$, it is certainly not *typical*, unless we are considering easier settings.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I don't think sacrificing the coverage ratio to avoid estimating $\Lambda$ is a good idea, since the authors have proposed a feasible method to estimate $\Lambda$, especially when the coverage ratio could potentially be inflated many times. I still recommend that the authors conduct a thorough analysis of the estimation process, since the setting is after all different from previous works, and I will maintain my scores for now.
---
Reply to Comment 1.1.1:
Comment: Thank you for the feedback! We remain at your disposal for further questions, and are hopeful you will take our general rebuttal into consideration as well in your final evaluation.
Rebuttal: We thank all reviewers for the work invested into evaluating our paper, and the thoughtful feedback they shared with us. We are also glad about the many strengths of our work which have been highlighted, signifying that the amount of effort we put into this paper has not gone unnoticed.
No work comes without limitations, and overall, we also agree about most of the weaknesses which have been pointed out. In particular, our work is limited to a very specific setting (linear MDPs) while there are works which study more general settings; and we only provide bounds in expectation, as opposed to more desirable high-probability bounds. We believe these are all interesting directions for future work, which however do not overshadow the significance of our contribution.
Another important factor emerged from the reviews: our comparison with the state of the art could use some small improvements. Luckily, this is something we can easily fix in our final draft, thanks to the feedback shared by the reviewers. We already tried to address all the concerns, as best as possible, in our answers, and hope to have provided some more clarity. The main source of confusion seemed to be our table, which we updated and attached to this answer as a PDF (as suggested by reviewers, we fixed some of the rates and added some missing references). It is indeed non-trivial to compare with many different methods, across different settings and assumptions. Anyhow, the questions we received gave us the chance to dig deeper into some of the related works, which in turn strengthened our conviction that our method is very competitive in the settings it is designed for. It is one of the few -- actually, the only one we are aware of -- computationally efficient methods for linear MDPs with *proven* sample complexity guarantees, and, as far as we are aware, the first to tackle the average-reward setting. However, at the same time, this process made us realize that we probably gave too much importance to the comparison of rates and constants -- tables are good for an overview, but research cannot be reduced to a table. By doing so, we took away some of the light from what we believe are some of the great strengths of our contribution.
For this reason we wish to emphasize that, in our minds, one of the main values of our work lies in the novelty of our approach. While there are other LP-based approaches out there (as some of the reviewers pointed out), these use completely different parametrizations to deal with function approximation. Our parametrization is derived from different principles and we have demonstrated that it can provide bounds that scale with the *correct* comparator-dependent quantity without any pessimistic adjustment (see more about this below). This is a conceptual novelty that we think should be appreciated, and we strongly feel that it is worth sharing these ideas with the community. Indeed, the fact that such a new approach does not immediately attain the best possible sample rate (similarly to other computationally efficient algorithms) is not surprising, and rejecting new ideas because they do not immediately lead to strict improvements along **every** possible dimension is arguably rather shortsighted.
Besides the originality of our method, we would like to point out another significant advantage over previous works that has to do with our definition of coverage ratio. Although we did provide a detailed comparison of different notions of coverage in Section 6 of the paper, additional insights that we gained since the submission allow us to say that our definition of coverage ratio is better in a fundamental way than the more common definitions, and in fact is *the* correct notion for linear function approximation.
This kind of coverage ratio appeared before in the literature, for instance for finite-horizon MDPs (see "Provable Benefits of Actor-Critic Methods" by Zanette et al. (2021)), but never, to our knowledge, for the settings we consider. To keep this remark short, let us compare our ratio from Definition 3.1 when $c=1/2$ with the more common ratio from Equation 17. You can easily see that our ratio only requires the behavior policy to cover well *a single direction* in feature space, namely the expected feature vector under the optimal policy. In comparison, the ratio from Equation 17 requires the behavior policy to cover well *the entire subspace* spanned by features visited by the optimal policy, which is much more demanding. Our "single-direction" property continues to hold when $c=1$. So, although we cannot prove that our ratio $C_{\phi,1}$ is always smaller than other versions, one can use this geometric property to build examples where the classic ratio (eq. 17) is arbitrarily larger than $C_{\phi,1}$ (see the answer to Reviewer 6BYu for one such example). In practice, we expect this property to make a big difference when the feature dimension $d$ is large. To fully complement our discussion from Section 6, note that the ratio from Equation 18, which is the "linear" specialization of the notion of coverage that is commonly considered in works on general function approximation (such as Xie et al.), does *not* display the single-direction property either.
Pdf: /pdf/0ad09115d07e7135804bdf783dd6c9ea063ee610.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper considers the problem of offline reinforcement learning (RL) for linear Markov Decision Processes (MDPs) under the infinite-horizon discounted and average-reward settings. The authors propose a primal-dual optimization method based on the linear programming formulation of RL, which allows for efficient learning of near-optimal policies from a fixed dataset of transitions under partial coverage. The proposed algorithms improve the sample complexity compared to previous methods from $O(\epsilon^{-5})$ to $O(\epsilon^{-4})$ under the discounted setting and provide the first line of result in the average-reward setting with realizable linear function approximation and partial coverage.
Strengths: 1 The proposed algorithm improves existing algorithms in both statistical efficiency and computational efficiency under the discounted reward setting with the linear function approximation (we note the baseline may handle problems beyond the linear MDPs).
2 The algorithms presented in this paper do not explicitly leverage the principle of pessimism, but focus on the linear programming formulation of MDP, and rely on a new reparametrization trick extended from the tabular case. The technique itself seems to be novel to me.
3 The algorithms present the first line of work for the offline average-reward MDP.
4 The paper is easy to follow, with a thorough comparison with existing work that clearly positions the results in the literature.
Weaknesses: 1 I am confused about the requirement of $\Lambda$ to be invertible (line 140), as this seems to be very closely related to the uniform coverage condition, where we assume that the smallest eigenvalue of $\Lambda$ is bounded away from zero. I am wondering what the key difference between them is. Can you elaborate on this with some intuitions or examples?
2 The authors discuss the relationship between the coverage condition considered in this paper and that of [1] and show that the coverage condition is a low-variance version of the standard feature coverage ratio if $c=1/2$. However, in this case, the algorithm explicitly uses $\Lambda$, while the PEVI proposed in [1] does not. In contrast, $c=1$ leads to a worse bound but we do not need the knowledge of $\Lambda$. Could you provide a more detailed characterization or example to illustrate the difference between these two cases?
typo: line 328, $\epsilon^2 \to \epsilon^{-2}$
[1] is pessimism provably efficient for offline rl
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: see weakness
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > 1. I am confused about the requirement of $\Lambda$ to be invertible (line 140) as this seems to be very closely related to the uniform coverage condition where we assume that the smallest eigenvalue of $\Lambda$ is lower bounded from zero. I am wondering what is the key difference between them. Can you elaborate on this with some intuitions or examples?
Thank you for raising this very interesting point!
Indeed, in the tabular case $\Lambda$ reduces to $\mathrm{diag}(\mu_B)$ and therefore invertibility implies that the behavior policy explores the whole state-action space (uniform coverage). However, when considering linear function approximation, invertibility of $\Lambda$ just implies coverage of the *feature space*, which is very different from the usual notion of uniform coverage. In fact, many works use similar definitions of **partial** coverage where a $\Lambda^{-1}$ term appears (see our Discussion section and eq. 17 in particular).
Nonetheless, your question is interesting, because it turns out we do not even require uniform coverage of the feature space!
Certainly, if the behavior policy only covers a subspace of the feature space, $\Lambda$ is not invertible. However, this is only a problem if the "optimal direction" $\lambda^* = E_{\mu^*}[\phi(x,a)]$ does not belong to this subspace. In this case we cannot do much, and we must set the coverage to $+\infty$. On the other hand, when the optimal direction *does* belong to the subspace visited by the behavior policy, we can just focus on this subspace. Some other directions may not be seen in the data but this does not matter. This implies we can use the Moore-Penrose pseudoinverse $\Lambda^\dagger$ and the substitution $\lambda \gets \Lambda^\dagger\beta$. This is possible because when $\lambda^*$ is in the range of $\Lambda$, we have $\Lambda^\dagger\Lambda \lambda^* = \lambda^*$ (notice that $\Lambda^\dagger\Lambda$ projects any vector into the range of $\Lambda^T = \Lambda$). The coverage ratio thus becomes $C=(\lambda^*)^T\Lambda^\dagger\lambda^*$.
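The projection identity used here is easy to verify numerically. The following sketch (illustrative numbers only) builds a rank-deficient feature covariance $\Lambda = vv^\top$ and checks that $\Lambda^\dagger\Lambda$ acts as the identity on vectors in $\mathrm{range}(\Lambda)$, and as the orthogonal projection onto that range otherwise:

```python
import numpy as np

# Rank-deficient feature covariance: Lambda = v v^T covers only span(v).
v = np.array([2.0, 1.0])
Lam = np.outer(v, v)
Lam_pinv = np.linalg.pinv(Lam)

# If the "optimal direction" lies in range(Lambda), then Lambda^+ Lambda
# acts as the identity on it ...
lam_star = 3.0 * v
assert np.allclose(Lam_pinv @ Lam @ lam_star, lam_star)

# ... while a direction outside the range is mapped to its orthogonal
# projection onto range(Lambda) = span(v).
w = np.array([0.0, 1.0])
proj_w = (w @ v) / (v @ v) * v
assert np.allclose(Lam_pinv @ Lam @ w, proj_w)
```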
We will mention this in the final version of the paper to avoid confusing future readers. Thank you again for bringing this up!
> 2. The authors discuss the relationship between the coverage condition considered in this paper and that of [1] and show that the coverage condition is a low-variance version of the standard feature coverage ratio if $c=1/2$. However, in this case, the algorithm explicitly uses $\Lambda$, while the PEVI proposed in [1] does not. In contrast, $c=1$ leads to a worse bound but we do not need the knowledge of $\Lambda$. Could you provide a more detailed characterization or example to illustrate the difference between these two cases?
In the general case, we don't know how to make the numerical comparison of the three ratios ($C_{\phi,1}$, $C_{\phi,1/2}$, and $C^\diamond$, the one from PEVI and Equation 17) more precise than what is already in the discussion section, which the reviewer has correctly summarized: $C_{\phi,1/2}$ is always smaller than $C^\diamond$, but $C_{\phi,1}$ and $C^\diamond$ are not comparable in general. However, we would like to highlight a clear advantage of our ratio which holds for any value of $c$: the decoupling of the "optimal feature" vector $\lambda^*=E_{\mu^*}[\phi(x,a)]$, which we have so far presented in terms of "variance reduction", implies that only the "average optimal direction" $\lambda^*$ needs to be covered by the behavior policy. In comparison, PEVI's ratio $C^\diamond$ requires the behavior policy to cover the whole subspace of features visited by the comparator policy. Using this fact, we can actually construct a small numerical example where $C^\diamond$ is arbitrarily larger than $C_{\phi,1}$:
Consider a one-state linear MDP with four actions and a single relevant step (this can be realized in the discounted setting by adding an absorbing state). One can construct it in such a way that there are only 4 possible feature vectors: $\phi_1 = [1, 0]$, $\phi_2 = [1,1]$, $\phi_3 = [0, 1]$ and $\phi_4 = [-1, -1]$ (the feature dimension is $d=2$).
The comparator policy visits $\phi_1$ with probability $1/2$ and $\phi_3$ with probability $1/2$.
The behavior policy visits $\phi_1$ with probability $\epsilon$, $\phi_2$ with probability $(1-\epsilon)/2$, and $\phi_4$ with probability $(1-\epsilon)/2$, where $0<\epsilon<1/2$.
By plotting the four feature vectors you can already see that the features visited by the optimal policy span all of $\mathbb{R}^2$, while for $\epsilon=0$ the features of the behavior policy would be confined to a $1$-dimensional subspace. However, the average optimal feature $[1/2, 1/2]$ belongs to this line. Indeed, you can check that PEVI's ratio is $C^\diamond = \frac{2-\epsilon}{2\epsilon(1-\epsilon)}$, which can be arbitrarily large for small values of $\epsilon$. Instead, our ratio $C_{\phi,1}=\frac{1}{4(1-\epsilon)^2} < 1$. Of course, $C_{\phi,1/2}=\frac{1}{4(1-\epsilon)}$ is even smaller.
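This example can also be checked numerically. The sketch below assumes our reading of the definitions, namely $C^\diamond = E_{\mu^*}[\phi^\top\Lambda^{-1}\phi]$, $C_{\phi,1/2} = (\lambda^*)^\top\Lambda^{-1}\lambda^*$, and $C_{\phi,1} = \|\Lambda^{-1}\lambda^*\|^2$:

```python
import numpy as np

eps = 0.05
phi = {1: np.array([1.0, 0.0]), 2: np.array([1.0, 1.0]),
       3: np.array([0.0, 1.0]), 4: np.array([-1.0, -1.0])}

# Comparator: phi_1 and phi_3 w.p. 1/2 each; behavior: phi_1 w.p. eps,
# phi_2 and phi_4 w.p. (1-eps)/2 each.
lam_star = 0.5 * phi[1] + 0.5 * phi[3]          # = [1/2, 1/2]
Lam = (eps * np.outer(phi[1], phi[1])
       + (1 - eps) / 2 * (np.outer(phi[2], phi[2]) + np.outer(phi[4], phi[4])))
Lam_inv = np.linalg.inv(Lam)

# C_diamond = E_{mu*}[phi^T Lambda^{-1} phi]  (our reading of eq. 17).
C_diamond = 0.5 * phi[1] @ Lam_inv @ phi[1] + 0.5 * phi[3] @ Lam_inv @ phi[3]
C_half = lam_star @ Lam_inv @ lam_star          # C_{phi,1/2}
C_one = np.linalg.norm(Lam_inv @ lam_star) ** 2  # C_{phi,1}

# Match the closed forms stated above.
assert np.isclose(C_diamond, (2 - eps) / (2 * eps * (1 - eps)))
assert np.isclose(C_half, 1 / (4 * (1 - eps)))
assert np.isclose(C_one, 1 / (4 * (1 - eps) ** 2))
assert C_one < 1 < C_diamond    # C_diamond blows up as eps -> 0
```

Shrinking `eps` makes `C_diamond` arbitrarily large while both of our ratios stay below $1$.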
In practice, we expect this "single direction" property to make a big difference when the feature dimension $d$ is large. Finally, we would like to remind the reader that PEVI considers the finite-horizon setting, which is considerably easier than the one we consider.
> typo: line 328, $\epsilon^{2}\to\epsilon^{-2}$
Thanks for catching this!
---
Rebuttal Comment 1.1:
Title: thanks for the response
Comment: Thanks for the clarification. I would keep my score toward acceptance.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive inclination towards our work. We remain available in case you have further questions or concerns. | Summary: The authors investigate offline RL in linear MDPs and introduce a novel LP-based method. They assert that their proposed approach achieves the lowest sample complexity of $O(1/\epsilon^4)$ among computationally efficient algorithms. In comparison, existing computationally efficient algorithms can achieve $O(1/\epsilon^5)$. Additionally, the author's theory can be extended to the average reward setting.
Strengths: * To the best of the author’s knowledge, in offline linear MDPs, the result in the average-reward setting is novel.
* The LP formulation in linear MDPs is worthwhile to investigate
Weaknesses: * I am uncertain about whether it is appropriate to claim that existing offline RL algorithms in linear MDPs achieve $O(1/\epsilon^5)$. It appears that [38] may have better sample complexity. In Table 1 of the manuscript, the author mentions that [38] cannot handle the discounted setting. However, extending from the finite-horizon to the discounted infinite-horizon setting is relatively straightforward. Hence, this comparison may not be entirely fair. If [38] indeed has better sample complexity, it significantly impacts the author's contribution. Thus, I currently rate the paper with a score of 4.
* I am not entirely certain about the significance of the extension to the average reward case.
* Presently, I cannot determine whether the reason [9] and [36] cannot handle the average reward case is due to the algorithms or their analysis. If this limitation arises from their analysis, their algorithm has the potential to be superior as it can handle more general MDPs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you provide further elaboration on my concerns regarding (1) the comparison between the author's work and [38], and (2) the third point in the weakness section?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: They discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time spent reviewing our paper, and the feedback you provided!
> * I am uncertain about whether it is appropriate to claim that existing offline RL algorithms in linear MDPs achieve $O(1/\epsilon^5)$. It appears that [38] may have better sample complexity. In Table 1 of the manuscript, the author mentions that [38] cannot handle the discounted setting. However, extending from the finite-horizon to the discounted infinite-horizon setting is relatively straightforward. Hence, this comparison may not be entirely fair. If [38] indeed has better sample complexity, it significantly impacts the author's contribution. Thus, I currently rate the paper with a score of 4.
In [38], the authors do report a sample complexity of $O(\epsilon^{-2})$ when the data generation distribution (behavior policy) is fixed. Indeed, this is an improvement over our $O(\epsilon^{-4})$ guarantee. However, their method, specified in their Equation 10, exploits a backward induction technique which is only feasible in the finite-horizon setting. It is unclear how such arguments can be generalized to the infinite-horizon settings we consider in the present paper without compromising the rate. More generally, extending results on linear MDPs beyond the finite horizon has been historically challenging, even in the online setting.
> * I am not entirely certain about the significance of the extension to the average reward case.
Like many people in the RL theory community, we believe that the average-reward setting is the most challenging one. While challenge is obviously not equal to significance, we find it remarkable that our approach works in this setup just as well as in the discounted-reward setting. Indeed, as illustrated by Table 1, most previous work in the literature is specific to either the finite-horizon case or the discounted setting, and transferring ideas from one setting to the other is typically hard (especially when going from finite horizon to discounted). In contrast, our approach readily works for two of the most challenging settings, which we believe is a significant contribution. More generally, we believe that most real-world RL problems are most naturally formulated in one of these two infinite-horizon frameworks, and that the finite-horizon framework, which is very commonly studied in theory, is insufficient for addressing most tasks of practical interest.
> * Presently, I cannot determine whether the reason [9] and [36] cannot handle the average reward case is due to the algorithms or their analysis. If this limitation arises from their analysis, their algorithm has the potential to be superior as it can handle more general MDPs.
The algorithms of [9] and [36] are based on ideas from the approximate dynamic programming literature, which are often limited to the discounted setting. Concretely, their approach makes crucial use of the contractive property of the discounted Bellman operators, which does not generally hold in the average-reward setting (especially not under the general assumptions we make in our work). This limits not only their analysis but also the applicability of their algorithms, which rely on a policy evaluation procedure that is not readily available for average-reward MDPs.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive responses. I still have some uncertainty regarding the statement regarding [38], "which is only feasible in the finite horizon setting." It appears to me that an analogous approach could be applicable to the infinite horizon discounted setting as well. I will maintain my current evaluation score and ask Area Chair to check this point. While I am retaining my score presently, I am open to raising the score later if I am incorrect.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response, and for being open to raise your score.
Your initial question prompted us to reason about how to extend the cited work to our setting. However, we were not able to prove a *straightforward* reduction (see below for details). For this reason, we suspect that if such a simple reduction were possible, it would have been mentioned directly by the authors.
The central part of the algorithm of Zanette et al (Algorithm 1) is the critic procedure `PLSPE` (shown in Algorithm 2), which is invoked at each iteration $t$ of the main loop. This procedure aims to solve a convex problem (Eq. 10a to 10c) which requires optimizing variables of size $dH$ and has a number of constraints proportional to $H$. Thus, it is impossible to run this procedure "as is" in the setting we consider, because in our case $H=\infty$. One idea could be to try to replace $H$ with the effective-horizon $1/(1-\gamma)$. This might be a good approach, but it is not immediately clear if it would work, and it also presents some downsides. For example, it would result in a non-stationary policy, which in practice is something very unappealing (i.e., imagine having to store 99 neural networks as opposed to 1). Moreover, the effective-horizon is just the average length of an "episode", and it would be unclear which policy to use when the interaction lasts longer than $1/(1-\gamma)$. Even addressing all of these concerns, it remains questionable if their algorithm would give us any advantage in terms of computational efficiency, since it requires computing a quantity proportional to $(d|A|n)$ at each iteration, where $n$ is the size of the dataset (see Eq. 10b).
Obviously, this does not imply that extending the work of Zanette et al to the infinite-horizon setting is impossible; in fact, we are confident that it is possible by substantially changing their method. For this reason, if you have some specific approach in mind, we encourage you to share it with us so that we can discuss it together.
As an alternative approach, one could also try to completely replace the critic procedure of Zanette et al with something specifically designed for the infinite-horizon setting. In our opinion, this could result in something similar to PSPI (see Algorithm 1 of [Xie et al][1]), which uses a pessimistic policy evaluation procedure, which can be efficiently implemented with a least-squares based approach. However, the resulting method does not attain the optimal sample complexity either. Please refer to our answer to reviewer 7fb2 for a more detailed comparison of our method and PSPI.
We hope this helps address your uncertainty, and we remain available for further questions.
[1]: https://arxiv.org/pdf/2106.06926.pdf | null | null | null | null |
Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization | Accept (poster) | Summary: The manuscript proposes a novel federated learning method to alleviate the negative impact of the "client drift" problem and enhance consistency in the FL paradigm. The manuscript also analyzes the intrinsic impact of local consistency on optimization error, test error, and generalization error. Several experiments are conducted to validate the efficiency of the proposed method.
Strengths: 1. The proposed method outperforms baseline methods without increasing communication costs and can be easily incorporated into other methods.
2. The manuscript provides ample theoretical analysis to demonstrate the theoretical bounds of the proposed method.
Weaknesses: 1. The hyperparameters were not tuned for different methods. Since the experiments are highly sensitive to hyperparameter settings, the superiority of the proposed method is likely due to the selection of hyperparameters. Given that the personalized process proposed in the manuscript bears resemblance to the momentum of parameter updates, I suspect that when tuning the learning rate for the baseline method, the baseline method may outperform the proposed method.
2. The results shown in Figure 1 suggest that the proposed method is highly sensitive to the choice of beta.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The manuscript thinks that inconsistent local optima cause "client drift," which results in the inferiority of federated learning (FL) algorithms. The proposed method aims to alleviate this divergence by employing personalized relaxed initialization at the beginning of each round. I have a question: based on my understanding, personalized initialization in each round would lead to greater differences across different clients. So, how does this strategy help alleviate the divergence?
2. After a brief review of the theoretical analysis, I am uncertain if I correctly understood the proof. In formulas (9) and (14), the upper bounds achieve their minimum when beta=0. If that's the case, why does the personalized relaxation contribute to improving the performance of FL algorithms?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The experiments in the manuscript are not rigorous enough as the hyperparameters for different methods were not tuned. This could potentially lead to an overestimation of the performance of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and comments on our work. We'll answer your questions one by one, addressing some misunderstandings as well as some essential academic questions. We are also very honored to share some of our understanding with you.
## About the question of "The hyperparameters were not tuned for different methods.":
**We have finetuned the hyperparameters and all baselines are carefully tuned and compared fairly.** Due to the page limitation, the details are stated on **Line.738 (Table.6)** and **Line.742 (Table.7)** in the **Appendix** and we also submit the code in the Supplementary Material.
We tested all algorithms and report in Table.6 the parameter selection under which most algorithms train well; this protocol is also widely used in many previous works. For instance, [1] provides hyperparameters on page 15 (Appendix) which are similar to ours; [2] provides hyperparameters in Section E.2, adopting the same learning rate as ours; and [3] provides the hyperparameter selection in Section C.2, where most parameters are the same as ours. Additionally, we also finetuned personalized hyperparameters in Table.7. Moreover, we use the same set of hyperparameters when testing the relaxed initialization as a plug-in to current methods, i.e. SCAFFOLD and FedSAM, and they are also improved by the RI, as shown in Table.2. Our proposed RI is not a trick based on a specific hyperparameter selection; it works both theoretically and experimentally. Additional experiments re-tuning the learning rate are still running, and we will reply as soon as they are finished.
**We also use sampling with replacement in the dataset split.** Compared to **sampling without replacement**, our split is more difficult, and the performance drops considerably for all methods. It means one data sample may appear on multiple local clients. This situation is more realistic because, with large-scale federated clients, the probability of data duplication is very high.
[1] Federated Learning Based on Dynamic Regularization (ICLR 2021)
[2] Generalized Federated Learning via Sharpness Aware Minimization (ICML 2022)
[3] FedCM: Federated Learning with Client-level Momentum
## About the question "The results shown in Figure 1 suggest that the proposed method is highly sensitive to the choice of beta.":
We have proven that the coefficient $\beta$ has a maximum value (see the $\beta$ selection range in Theorem 6); if it exceeds this maximum, the method may not converge. Figure.1 demonstrates that our proof matches the experiments. In the experiments, we suggest selecting this coefficient as a small positive value, e.g. 0.05 or 0.1. As Figure.1 shows, the method is very stable within the valid range.
## About the question "I have a question: based on my understanding, personalized initialization would lead to greater differences across different clients. So, how does this strategy help alleviate the divergence?":
Our motivation is that if the current initialization is farther away from the local optimal solution than the global server model is, then after local training finishes, the local model will also be farther away from the local optimal solution than before. To implement this, we propose relaxed initialization (RI). The working mode of RI is similar to the idea of "lookahead". The differences are that (1) "lookahead" only works at the end of each stage, and (2) "lookahead" only works for the global models on the global server, whereas RI helps each local client backtrack a small distance at the beginning of each stage. Therefore, after the local training in the next stage, the trained local models will be closer to each other than before. We provide a figure to illustrate the principles of the relaxed initialization in the one-page .pdf file.
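The backtracking idea described above can be sketched in a few lines. This is our illustrative reading of the update (the exact rule and the sign convention should be checked against the algorithm in the paper; `relaxed_init`, `w_g`, and `w_loc` are names we introduce here):

```python
import numpy as np

# Sketch of relaxed initialization (RI): at the start of round t, client i
# starts local training not from the global model w_global, but from a point
# pushed a small step *away* from its own last local state w_last_local.
# beta is the relaxation coefficient (small positive, e.g. 0.05 or 0.1).
def relaxed_init(w_global, w_last_local, beta=0.1):
    return w_global + beta * (w_global - w_last_local)

w_g = np.zeros(2)               # current global model
w_loc = np.array([1.0, 0.5])    # client's last local state, near its local optimum
w_0 = relaxed_init(w_g, w_loc, beta=0.1)
# w_0 is farther from w_loc than w_g is, so the client "backtracks" before training.
```

Because the initialization is farther from the client's local optimum than the plain global model, subsequent local training starts from a less client-specific point, which is the mechanism claimed to reduce divergence.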
## About the question of "In formulas (9) and (14), the upper bounds achieve their minimum when beta=0.":
Formulas (9) and (14) are not the final conclusion. We aim to find the best selection for the excess risk rather than for the optimization error alone. Here we briefly introduce the final conclusion in Section 4.3.4 (formula (18)).
We combine the Theorem.2, 3, 5 and bound the excess risk as:
$$
\mathcal{E}_E\leq \widetilde{\mathcal{O}}\left(\frac{D+L(\sigma_l^2 + KG^2)}{NKT}\right) + \mathcal{O}\left(\frac{1}{S}\left[\sigma_l(TK)^{cL}\right]^\frac{1}{1+cL}\right) + \widetilde{\mathcal{O}}\left(\frac{\sqrt{D+G^2}K^\frac{cL}{1+cL}}{T^\frac{1}{1+cL}}\right).
$$
Formula (9) only indicates the optimization bias term; formula (14) indicates the latter two terms, the generalization bias. Because the constant part in the convergence term $\Delta^t$ also contains $\beta$, formula (14) is not the final version. The above upper bound contains three main parts: the first is the optimization error, the second is the stability bias, which is related to the total number of samples, and the dominant part is the divergence bias, which comes from the consistency term. Here we omit the constant part in the above formula. **In the paragraph "Selection of $\beta$" (Line.284)**, we discuss it in detail. The constant part of the dominant term (related to $\beta$) is $\frac{(1+\beta)^{\frac{1}{\beta cL}}}{\sqrt{1-96\beta^2}}$ (Line.287), where $0<\beta<\frac{\sqrt{6}}{24}$. It can easily be verified that when $\beta$ is selected as a specific positive number, the constant term achieves its minimum.
We summarize them as follows:
(1) $\beta=0$ only helps to minimize the optimization error.
(2) It is a trade-off between optimization and excess risk.
(3) To minimize the dominant term of the excess risk, $\beta$ should be selected as a small positive number instead of 0.
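The claim in (3) can be checked with a small numerical scan of the constant factor quoted above. The value $cL=1$ below is an illustrative assumption of ours, not a value from the paper:

```python
import numpy as np

# Scan the constant factor (1+beta)^(1/(beta*cL)) / sqrt(1 - 96*beta^2)
# over the valid range 0 < beta < sqrt(6)/24 ~ 0.102, for illustrative cL = 1.
cL = 1.0
betas = np.linspace(1e-4, 0.10, 2000)
const = (1 + betas) ** (1 / (betas * cL)) / np.sqrt(1 - 96 * betas ** 2)

limit_at_zero = np.exp(1 / cL)        # value of the numerator as beta -> 0
best_beta = betas[np.argmin(const)]
print(f"minimizer beta ~ {best_beta:.4f}, min constant = {const.min():.4f}, "
      f"beta->0 limit = {limit_at_zero:.4f}")
```

For this choice of $cL$, the minimum is attained at a small positive $\beta$ and lies strictly below the $\beta\to 0$ limit, matching the statement that a small positive $\beta$ beats $\beta=0$ on the dominant term.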
### It is a pleasure to discuss this with you, which will help us improve this work further. We have explained and supported with proofs the concerns mentioned in the reviews. If there are any questions, we are happy to continue the discussion with you. Thank you again for reading this rebuttal.
---
Rebuttal Comment 1.1:
Title: Additional Experiments on Tuning Learning Rate
Comment: Dear reviewer g4w5, to address your concerns about finetuning the learning rates, we re-tuned the learning rate to show that our hyperparameter selection is appropriate. Due to the rebuttal time limitation, we tested the experiments on the following setup.
CIFAR-10 Dirichlet-0.1 split, 10 active clients of total 100 clients, other hyperparameters are selected as we introduced in our paper (Appendix)
Generally, the global learning rate is set to 1.0 in FL except for some specific methods; a global lr of 1.0 means averaging the local models. We follow this convention, fix the global learning rate, and finetune the local learning rate as follows.
| Method | lr=0.05 | lr=0.1 | lr=0.2 | lr=0.5 |
| :----: | :----: | :----: | :----: | :----: |
| SCAFFOLD | 73.65% | **75.19%** | 74.93% | 53.16% |
| FedCM | 69.56% | **74.14%** | 73.73% | 60.32% |
| FedDyn | 71.74% | 75.10% | **75.27%** | - |
"-": can not converge
As shown in our paper (Appendix), the best selection of the local learning rate is 0.1, which allows a fair comparison across all baselines. We finetuned the hyperparameters over a wide range and report the fair and best selection. We also followed the classical papers [1,2,3] in setting up the experiments. Moreover, relaxed initialization can further improve the performance of some current advanced methods; both the theoretical analysis and the experiments verify its efficiency. **Relaxed initialization is not a trick that relies on finetuning the learning rate.**
[1] Federated Learning Based on Dynamic Regularization (ICLR 2021)
[2] Generalized Federated Learning via Sharpness Aware Minimization (ICML 2022)
[3] FedCM: Federated Learning with Client-level Momentum | Summary: This paper proposes to initialize the local state by moving away from the current global state toward the reverse direction of the latest local state. They demonstrate theoretically and empirically that this revision can help consistency for better performance. The method is also a practical plug-in that could easily to incorporated into other methods.
Strengths: - This paper goes deeper into how consistency work in FL systems, and provides a simple and effective initialization-based solution.
- This paper gives a comprehensive theoretical analysis of the problem, which I think is a good contribution.
- The paper is well-written and easy to follow.
- Experiment shows good performance.
Weaknesses: - I understand there might not be enough space for experiments and the focus looks to be theoretical analysis, but it would be better if the authors can give more experimental results for different FL settings, e.g., more clients/datasets, which may help evaluate the method better. For example, 100 clients/10% participation rate and 200 clients/ 5% participation rate can be relatively limited. Maybe can we have a curve for participation rate with the same amount of clients and perhaps a curve for number of clients to see how these FL settings influence the performance?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Does the method also work well for a large number of clients, e.g., 1000?
- How does participation rate influence performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have stated the limitation and future work as pFL.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and affirmation of our work. We'll answer your questions one by one in the following, addressing some misunderstandings as well as some essential academic questions worth exploring. We are also very honored to share some of our understanding with you.
## About the question "Does the method also work well for a large number of clients, e.g., 1000?"
Thank you very much for this question. Large-scale training is one of the goals pursued by federated learning. We ran experiments with a total of 1000 clients and summarize the results in the following table.
#### Total clients=1000, CIFAR-10, Partial-participation=0.01 (10 active clients/round), Communication round=500, Dirichlet-0.1 split
Due to the very large number of clients, each client is provided with 50 training samples only. In order to avoid serious overfitting, we reduce the batchsize from 50 to 25. The other hyperparameters are the same as we mentioned in our paper.
| Method | Dirichlet-0.6 | Dirichlet-0.1 |
| :----: | :----: | :----: |
| FedAvg | 63.56% | 58.61% |
| SCAFFOLD | 65.11% | 60.94% |
| FedInit | 65.08% | 60.71% |
Though we do not adjust the coefficient of the relaxed initialization and simply use 0.1, the proposed FedInit still achieves test accuracy comparable to the SCAFFOLD method. Due to the time limitations of the rebuttal stage, we will fine-tune this parameter in the final version to find the optimal value. Nevertheless, the improvements from the relaxed initialization remain strong even in the 1000-client setup.
| Method | Vanilla | + relaxed initialization | improvement (value) |
| :----: | :----: | :----: | :----: |
| FedAvg | 58.61% | 60.71% (FedInit) | +2.1% |
| SCAFFOLD | 60.94% | 61.47% | +0.53% |
The vanilla FedAvg is improved by 2.1% and the vanilla SCAFFOLD by 0.53% (Dirichlet-0.1). Likewise, we do not finetune $\beta$ and simply select it as 0.1. We believe this improvement will increase further after a simple search. Due to the time limitation, we will complete the entire experiment in the next version.
## About the question of "How does participation rate influence performance?":
To explore the performance of the different participation rates, we do the following experiments. Due to the time limitation, we test the proposed FedInit on two setups.
#### Total clients=100, CIFAR-10, Dirichlet-0.1, Communication round=500
| | 5% | 10% | 20% | 30% | 50% |
| :----: | :----: | :----: | :----: | :----: | :----: |
| Top-1 Accuracy | 73.16% | 75.89% | 76.04% | 76.33% | 76.52% |
| Loss (min) | 0.5832 | 0.4799 | 0.4040 | 0.3854 | 0.3706 |
#### Total clients=1000, CIFAR-10, Dirichlet-0.1, Communication round=500
| | 1% | 2% | 5% | 10% |
| :----: | :----: | :----: | :----: | :----: |
| Top-1 Accuracy | 60.61% | 61.83% | 62.30% | 62.12% |
| Loss (min) | 1.1381 | 1.0839 | 1.0462 | 1.0374 |
As above, to avoid overfitting, we set the batch size to 25 in the 1000-client setup. The other hyperparameters are fixed as adopted in our paper. Both experiments validate the influence of the participation ratio: as the number of active clients increases, the performance improves, which matches the theoretical analysis. One key point is to make sure that the training process does not overfit when scaling to more clients. We show the loss curves in the one-page .pdf file to validate these trends.
### It is a pleasure to discuss this with you, which will help us improve this work further. We have explained and supported with proofs the concerns mentioned in the reviews. If there are any questions, we are happy to continue the discussion with you. Thank you again for reading this rebuttal.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I have read all the reviews from other reviewers and responses from the author. I would like to keep the score.
---
Reply to Comment 1.1.1:
Title: Thank you very much for the reviews
Comment: Dear reviewer sALG, thank you very much for your affirmation of our work. We will add the experiments of total 1000 clients into the Appendix in the next version. Thank you again for this review! | Summary: This paper aims to solve the “client-drift” problem in Federated Learning, which is caused by the NonIID data. Specifically, this paper proposes initializing the local model of each client with its personalized model to alleviate the problem. Further, the paper theoretically analyzes the impact of inconsistency on the convergence of FL. Besides, extensive experiments also demonstrate the effectiveness of the proposed method.
Strengths: 1. The idea of initializing the local model for solving the NonIID problem is interesting.
2. The theoretical analysis for the proposed method is solid.
3. The experimental results demonstrate the effectiveness of the proposed method.
Weaknesses: 1. Although the theoretical results are sufficient to verify the convergence of the proposed method, they cannot present the effectiveness of the initialization strategy in principle. More specifically, how does the proposed method reduce the divergence term compared to the vanilla FedAvg?
2. The workflow of the proposed method seems not correct. Are the locations of line 10 and line 12 placed correctly?
3. How to obtain the value of $w_{i,k}^{t-1}$ is not clear. The client may not participate in the previous round $t-1$ under the setting of random client selection. If the local model $w_{i,k}^{t’-1}$ is obtained in many previous rounds, the motivation of bias correction using $ w_{i,k}^{t’-1} - w^{t-1}$ is not reasonable due to the significant gap between $t’$ and $t$.
4. The key hyperparameter K is set inappropriately. In typical FL literature, the number of local epochs is usually set to range from $5$, $10$, and up to $20$, which may contain many local iterations (mini-batches). However, the experiment of this paper only adopts a small number of local iterations instead of local epochs, e.g., 5 local iterations in Table 1, which seems manually adjusted.
5. It would be better if there is a figure to illustrate the principles of the proposed method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and affirmation of our work. We'll answer your questions one by one in the following, addressing some misunderstandings as well as some essential academic questions worth exploring. We are also very honored to share some of our understanding with you.
## About the question of "theoretical results cannot present the effectiveness of the initialization strategy in principle":
Thank you for pointing out this. We discuss our theoretical analysis in each subsection of Section 4 and our theoretical analysis comprehensively explains why FedInit works better than vanilla FedAvg.
Our paper mainly explores the behavior of the consistency term in FL to understand how it affects each error term. The proposed FedInit method can be considered as FedAvg with a local relaxed initialization. From the perspective of optimization, we prove that the order of the convergence rate is not affected by this perturbation (Thm 1 & 2).
In Section 4.3.4, Thm.6 indicates that the consistency term mainly affects the generalization rather than the optimization in the excess risk. When we consider the PL-condition, the optimization error achieves $\mathcal{O}(1/T)$ and the generalization error achieves $\mathcal{O}(1/T^{\frac{1}{1+cL}})$; the dominant term comes from the consistency term. Therefore, our proof reveals that the consistency term affects the generalization more than the optimization in the vanilla FedAvg method. In Thm.6, we discuss the minimization in the paragraph "Selection of $\beta$". The constant part of the dominant term (related to $\beta$) is $\frac{(1+\beta)^{\frac{1}{\beta cL}}}{\sqrt{1-96\beta^2}}$ (Line.287), where $0<\beta<\frac{\sqrt{6}}{24}$. It can easily be verified that when $\beta$ is selected as a specific positive number, the constant term achieves its minimum, and this value is smaller than the limit of the constant term as $\beta$ approaches 0.
In summary, our proofs mainly indicate:
(1) (**Consistency term mainly affects generalization in FL.**) From the perspective of excess risk (sometimes it could be considered as the test error), the consistency term mainly affects the generalization and dominates the excess risk.
(2) (**Relaxed initialization works better.**) From the excess risk perspective, Selecting a small positive $\beta$ works better than vanilla FedAvg theoretically, which could minimize the constant part in the dominant term. Both experiments and theoretical analysis could validate the efficiency of relaxed initialization.
## About the question "The workflow of the proposed method seems not correct.":
Thank you very much, and we are sorry for this mistake. Line.10 should be moved outside the inner loop: after local training, the client sends the optimized local state to the global server. Line.12 should be moved outside the middle loop: the global model is aggregated after the selected local clients send their local models to the global server. We will fix it.
## About the question of "How to obtain the value of $w_{i,K}^{t-1}$ is not clear.":
Thank you very much for pointing this out. At each communication round $t$, if client $i$ is selected to be active, $w_{i,K}^t$ is obtained by $K$ local training steps. If client $i$ is not selected, $w_{i,K}^t=w_{i,K}^{t-1}$. The same operation can be found in FedDyn [1], which inherits the states of the local clients that do not participate in training at the current stage.
**Why does it work?**
A client may not be selected for many communication rounds, but its last optimized local state still retains information about its local minimum. Local training drives each local model close to its local optimum (in most cases, local training makes the local training loss very low). Compared with adopting the current global model as the initialization, the effect of RI is to push the local initialization away from the local optimum. Therefore, using the historical state is still useful: even if it is not very accurate, it still works as a correction away from the local optimal solution.
We submitted our code in the Supplementary Material. In the experiments, we use exactly the above policy to train FedInit and achieve results comparable to many advanced methods. RI also helps many methods, e.g. SCAFFOLD and FedSAM, achieve higher test accuracy. It can work as a light plug-in to further improve their performance, as shown in Table.2.
[1] Federated Learning Based on Dynamic Regularization (ICLR 2021)
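The state-inheritance rule described above can be sketched in a few lines. The following is an illustrative simulation only (the function and the scalar "weights" are hypothetical stand-ins, not the submitted code): active clients refresh their cached local state $w_{i,K}^t$, while inactive clients simply carry $w_{i,K}^{t-1}$ forward, and the server aggregates only the active clients.

```python
import random

def fedavg_with_state_cache(num_clients=10, rounds=5, sample_size=4, seed=0):
    """Illustrative sketch of the bookkeeping described above: selected
    clients train and update their cached local state; unselected clients
    inherit their previous state w_{i,K}^{t-1}. Scalars stand in for models."""
    rng = random.Random(seed)
    w_local = {i: 0.0 for i in range(num_clients)}  # cached w_{i,K} per client
    w_global = 0.0
    for t in range(rounds):
        active = rng.sample(range(num_clients), sample_size)
        for i in range(num_clients):
            if i in active:
                # placeholder "local training": a step toward a client optimum
                w_local[i] = w_global + 0.1 * (i - w_global)
            # else: w_local[i] keeps its previous value (state inheritance)
        # server aggregates only the active clients' models
        w_global = sum(w_local[i] for i in active) / len(active)
    return w_global, w_local
```

With one round and 4 of 10 clients sampled, at least six cached states remain at their initial value, illustrating the inheritance rule.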
## About the question of "The key hyperparameter K is set inappropriately.":
We are sorry for this misunderstanding. **The $K$ in the theoretical analysis denotes the number of local iterations, but we perform 5 local epochs in the training. We will correct this in the next version.** We submitted the code in the Supplementary Material, where it can be verified that we use 5 local epochs for training.
We followed the experimental setups of the baselines in our paper and chose an intermediate value of 5 to compare their performance fairly. We are sorry for this misunderstanding and will correct the value of $K$ in our paper. We can use $E = 5$ to denote the local epochs in the hyperparameter introduction.
## About the question of "It would be better if there is a figure to illustrate the principles of the proposed method.":
Thank you very much for pointing this out. Since the theoretical analysis took up a lot of space, we did not provide a schematic diagram in the submitted version. We provide a figure in the one-page .pdf file to illustrate the principles.
### It is a pleasure to discuss this with you, which will help us to improve this work further. We have explained and proved the concerns mentioned in the reviews. If there are any remaining questions, we are happy to continue the discussion with you. Thank you again for reading this rebuttal.
---
Rebuttal Comment 1.1:
Title: Responses to answers
Comment: I have read all the reviews from other reviewers and responses from the author. I would like to improve my score.
---
Reply to Comment 1.1.1:
Title: Thank you very much for the reviews
Comment: Dear reviewer 9rh8, we appreciate your finding the mistakes in the algorithm box and your suggestion to draw a schematic illustrating the principles, which helps us further complete this work. We will fix the mistakes and misunderstandings mentioned above in the next version. Thank you again for the review! | Summary: This paper proposes an efficient stage-wise initialization for the federated learning paradigm, named FedInit, which could be extended as a plug-in to several existing methods. It provides theoretical analysis of both the convergence and the generalization to illustrate how the consistency term affects FL. Experiments also show its efficiency in practical scenarios.
Strengths: 1. FedInit is a light and nimble technique that does not introduce extra communication costs in the FL framework. Compared with other algorithms, FedInit can almost achieve the same performance as SOTA algorithms while maintaining the same communication cost as the vanilla FedAvg. And, Relaxed initialization (RI) could be easily extended into other advanced algorithms as a plug-in and efficiently improve their performance.
2. This paper explores the impacts of global consistency constraints in FL, which is an essential and interesting problem in federated learning. It also indicates the relationships between generalization and optimization.
3. This paper provides a theoretical analysis of the excess risk to comprehensively explain how the dominant factors affect the final performance (test accuracy) throughout the training process. Ablation studies are conducted to validate the conclusion of the theoretical analysis on the hyperparameters, i.e. local interval $K$, coefficient of RI $\beta$.
Weaknesses: 1. The relaxed initialization (RI) looks like an extrapolation algorithm between the global model and the local models as $w+\beta(w-w_i)$. Several methods including the benchmarks of this paper, i.e. FedProx, and FedCM, are accustomed to applying interpolation methods to adjust local training to make the entire heterogeneous training smoother. So what is their essential difference? I think the author should add a paragraph to discuss it.
2. This paper involves a lot of analytical proofs, so I suggest that the author write a simple proof sketch for each sub-sections to facilitate reading.
3. In Table.2, RI helps to largely improve the performance of FedSAM and SCAFFOLD but shows limited help for FedCM and FedDyn. What is the main reason for this phenomenon? Does it imply that RI is subject to some existing algorithms? I think the author should add a paragraph to discuss it.
4. Some typos (for example):
(1) Line.150 "... ... its generality"
(2) Line.79 "client drift" and Line.151 "client-drift"
(3) Line.192 the sentence is difficult to read
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and affirmation of our work. We will answer your questions one by one below, including some misunderstandings and some essential academic questions worth exploring. We are also honored to share some of our understanding with you.
## About the question "What is the essential difference between the extrapolation and interpolation in relaxed initialization (RI)?":
Although they look alike, they have completely opposite effects in relaxed initialization. We can express the two formulations in a unified form as $w_{init}=w+\beta(w-w_i)$, where $\beta > 0$ corresponds to extrapolation and $\beta < 0$ to interpolation. Reviewing the inconsistency caused by local updates on heterogeneous datasets, each local client is always optimized toward its own local optimum. From this perspective, each optimized $w_{i}$ is always close to that local optimum.
Therefore, if we select the interpolation method ($\beta < 0$), the above formulation shows that the local initialization state moves closer to the local optimum than the vanilla global state, which means each local state overfits the local dataset more than before and introduces a larger inconsistency. If we instead use extrapolation, the local initial state moves farther from the local optimum than the vanilla global state. Since this initialization serves as compensation, the local states obtained after the next stage of training also move closer to each other. We also theoretically prove that a small positive $\beta$ reduces the constant part of the generalization error bound. That is why we use extrapolation in the proposed RI calculation.
We also tested this empirically. In Table.3 (Line.348) of our paper,
| $\beta$ | -0.2 | -0.1 | 0 (FedAvg) | 0.01 | 0.02 | 0.05 | 0.1 | 0.15 |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Accuracy | 64.70 | 67.47 | 72.53 | 72.82 | 73.45 | 74.65 | **75.95** | 44.47 |
Using a positive $\beta$ is better.
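For concreteness, the RI update itself is a one-liner. The sketch below (hypothetical helper name, with plain scalar lists standing in for model parameters) applies $w_{init}=w+\beta(w-w_i)$ coordinate-wise:

```python
def relaxed_init(w_global, w_local, beta):
    """Relaxed initialization: w_init = w + beta * (w - w_i), coordinate-wise.
    beta > 0 extrapolates away from the client's last local state;
    beta < 0 interpolates toward it; beta = 0 recovers vanilla FedAvg."""
    return [wg + beta * (wg - wl) for wg, wl in zip(w_global, w_local)]
```

For example, `relaxed_init([1.0, 2.0], [0.5, 2.5], 0.1)` pushes each coordinate of the initialization away from the cached local state, giving approximately `[1.05, 1.95]`.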
## About the question "I suggest that the author write a simple proof sketch for each sub-sections to facilitate reading.":
Thank you for pointing this out. Due to space limitations, we removed the original proof sketch from the submitted version. We will add proof sketches in the next version to further clarify our proof process.
## About the question "What is the main reason that RI shows different performance on different methods?"
The proxy-based methods may not be improved by RI, because they utilize a proxy term $\Vert w - w_{Init}\Vert^2$ in the local objectives at each communication round, which is sensitive to the initialization state. For instance, FedDyn [1] uses a dynamic regularization term to constantly correct the local objective so that different clients share the same objective. It is a variant of ADMM that controls the dual variables. Due to the impact of the dual variable, if we directly use the relaxed initial state, the vanilla dual variable will no longer match the proxy term; in other words, this ADMM-type method in FL requires a more specific study. FedCM [2] uses the global update as a biased correction during local training, which already achieves very high consistency. That is why the performance of RI + FedCM is limited: the room for RI to further enhance the consistency of FedCM is constrained by the correction term.
[1] Federated Learning Based on Dynamic Regularization
[2] FedCM: Federated Learning with Client-level Momentum
## About the question of "Some typos":
Line.150, we will fix it as "its generalization performance".
Line.151, we will fix it as "client drift".
Line.192, this sentence says that $G$ controls the heterogeneity level and that the local interval $K$ is limited by $G$.
### It is a pleasure to discuss this with you, which will help us to improve this work further. We have explained and proved the concerns mentioned in the reviews. If there are any remaining questions, we are happy to continue the discussion with you. Thank you again for reading this rebuttal. | Rebuttal 1:
Rebuttal: **We are very grateful to all the reviewers for their valuable comments.**
We make individual responses to each reviewer to address the concerns they raised. Here we submit the one-page .pdf file, which contains the experiment curves mentioned by reviewer sALG and a figure illustrating the principle of the proposed relaxed initialization technique, as requested by reviewers 9rh8 and g4w5. Thanks again to the reviewers for their valuable comments.
Pdf: /pdf/4fa18d57b5833bee568ba6f35f01e873905930bd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Deep Momentum Multi-Marginal Schrödinger Bridge | Accept (poster) | Summary: The paper tackles the multi-marginal Schrodinger bridge space in phase space, by proposing a computationally tractable solver (DMSB). It leverages alternating Bregman projections to adapt the iterative proportional fitting algorithm in order to deal with multiple convex constraint sets.
The DMSB framework constrains the stochasticity of particle dynamics to their velocity component and therefore ensures smooth trajectories. The authors show how the proposed algorithm better describes scRNA-seq data by reconstructing lower-variance trends in the dynamics and (approximately) recovers the velocity field of particles.
Strengths: The paper builds on well-known results involving Schrödinger bridges (SBs) and convex optimization to practically solve the multi-marginal SB problem while ensuring smooth trajectories. It extends SBs [13] by considering random velocities, rather than random infinitesimal displacements. Furthermore, it generalizes the popular critically-damped Langevin diffusion (CLD) [12] framework to non-gaussian velocity marginals at the extremes.
A key contribution consists in performing 2 alternate optimization stages over the two sets $k_{\text{boundary}}$ and $k_{\text{bridge}}$, containing appropriately-partitioned constraints. The extension of the mean-matching objective to the phase space is also novel but relatively straightforward.
The content is well-motivated and appropriately described. Both mathematical results and algorithmic solutions are accompanied by remarks and informal insight, which help to understand their purpose and significance.
Several tables and diagrams further clarify the presentation by (i) precisely stating the notation used (Table 2), (ii) the problem description (Figure 1), (iii) the comparison against previous work (table 1), and the structure of the algorithm (Figure 2).
Weaknesses: The rationale behind the decoupling of constraints, i.e. the requirement that marginalizing joint densities over the previous/next timesteps yield the same distribution, could be made more explicit to the reader. In particular, it would be advisable to explain why the solution proposed by the authors avoids the “geometric averaging issue” (line 156) which affects instead the solution in [20].
I think that Proposition 4.5 should be followed by a more extensive discussion on the differences between constraints in Eqs. 9b and 10b: Why a conditional distribution appears in the former and a joint one in the latter?
**Miscellaneous**
In addition, I point out minor inaccuracies found while reviewing this draft:
- The quantities optimized in the formulas in the first row of Figure 1 are incorrect ($u$ and $a$ do not appear anywhere in the optimization objective).
- Proposition 4.1 looks more like a definition (and it is not even stated as a proposition in the work cited as a source).
- Wrong sections are cited on the right of Figure 2.
- There are spelling mistakes in lines 34, 66, 89, and 246.
- I would suggest refining the references to the Appendix, by clearly specifying if the link points to a proposition or a section (e.g., in line 128).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: **Experiments**
It is unclear why the colors used in Figure 3 vary across the rows. Could it be possible to identify the position and velocity of each group of points by using the same colors in both graphs?
Contrary to what was stated in the text (“it is almost serendipity that DMSB can also learn the reasonable velocity trajectory without any access to ground truth”, line 230), I don’t find the approximate recovery of velocities surprising. Given the formulation of momentum Schrödinger bridges (lines 125-126), the algorithm must find a reasonable velocity field, in order to produce meaningful trajectories.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As stated by the authors, the benefits brought by DMSB are not detectable when learning simple trajectories in relatively low-dimensional spaces. It is therefore unclear whether many practical problems would benefit from it since this comes at the expense of doing without convergence guarantees.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer EhNn
We express our gratitude to the reviewer for their valuable feedback. The summary provided is accurate, and the questions raised are both intriguing and perceptive.
Kindly find below our itemized responses, organized in order to address each of the reviewer's concerns.
#### **1. In particular, it would be advisable to explain why the solution proposed by the authors avoids the “geometric averaging issue” (line 156) which affects instead the solution in [20].**
In Section 4 of reference [20] (page 7), the authors introduce an analytic solution that incorporates the geometric averaging of two densities in both the numerator and denominator. The result is based on the constraint set $K_i$ (using the notation from [20]), which accounts for the boundary from two directions and thus leads to the geometric averaging in the analytic solution. While it is feasible to estimate this quantity in a discretized space, addressing it in continuous space poses a challenge, and currently no optimal solution exists for it.
One of the key highlights of our paper is the successful avoidance of this issue. We achieve this by decomposing the $K_i$ constraints into separate $K_{boundary}$ and $K_{bridge}$ components, allowing us to address and resolve the problem effectively through our proposed approach.
#### **2. Why a conditional distribution appears in the former and a joint one in the latter?**
One can obtain similar results as in the proof of Proposition B.5, but the outcome differs from that presented in line 480. Due to the constraints imposed on $K_{boundary}$, it becomes necessary to decompose the joint distribution as $p_{t_i}(x_{t_i})$ and $\hat{q}(v_{t_i}|x_{t_i})$. However, in the optimization of $K_{bridge}$, there are no remaining constraints, indicating that the global minimum of the optimization precisely matches the reverse diffusion while simultaneously meeting the boundary condition.
Interestingly, even though we lack density information from the boundary condition in the reference path measure, we fortuitously possess samples from it, which can be leveraged for constructing the reference path measure for the subsequent iteration. To achieve this, it is essential to ensure that the joint distribution at the boundary is equal (i.e., eq. (42b)).
#### **3 Miscellaneous**
Thanks for the careful reading! We will update in the revision.
#### **4 It is unclear why the colors used in Figure 3 vary across the rows. Could it be possible to identify the position and velocity of each group of points by using the same colors in both graphs?**
We intend to utilize distinct colors to represent position and velocity in the revised version. The draft plot with the desired color scheme can be found in Figure 1 of the attached Rebuttal PDF file.
#### **5 convergence guarantees?**
Regarding theory, as discussed in the conclusion section, despite the satisfactory performance of our approach, we acknowledge that we currently lack theoretical convergence results. However, we find encouragement in the work of [1], which sheds light on convergence proofs even when the exact solution at each Bregman iteration is unavailable. As [1] is almost concurrent work, we consider analysis based on it a direction for future research.
Regarding empirical performance, we mostly agree with the reviewer; however, even in very low dimensions our algorithm still offers a marginal advantage, as evidenced by Table 5.
[1] Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation. arXiv:2305.07247 | Summary: The paper aims at solving efficiently in high dimensions the multi marginal momentum Schrödinger Bridge, that is Schrödinger Bridge in phase space with multiple marginal constraints. They also tackle the issue of marginals constraints where only the positions are enforced. They reach this objective by proposing a new algorithm: DMSB.
The paper first focuses on adapting the iterative proportional fitting (IPF) algorithm to the momentum Schrödinger Bridge with fully specified marginals in phase space. They show that the IPF can be seen as an alternating maximization of log-likelihood.
Next they focus on the multi-marginal momentum Schrödinger Bridge. They propose a new formulation of the marginals constraints in order to apply their version of IPF as log-likelihood maximization via policy specification. Moreover they prove that the optimality conditions for the optimal bridge (under their new set of constraints) allows to sample the velocities when they aren't initially specified.
In order to train efficiently in high dimension they adapt a neural network parametrization of the policies known in the classical SB setting to the momentum SB setting.
Finally they test the the performance of DMSB on synthetic data. The performance on high dimensional data is higher than the baselines algorithm. Moreover they are able to recover velocity distributions which is a novelty.
The contributions can be summarized as follows:
- Extend the IPF to the momentum SB
- Introduce a new set of constraints for the Bregman iterations which allows for a neural net parametrization of the policies and sampling of velocities when they are not specified.
- Proposal and benchmark of a new algorithm DMSB made to solve momentum multi-marginal SB
Strengths: The presented use case of their algorithm is cell profiling over time. In that regard the proposed algorithm performs well above baselines and adresses the following technical challenges: sampling of the velocities, curse of dimensionality, partial informations on the marginals.
The sampling of velocities and curse of dimensionality are both tackled by a combination of proposition 3.1 and proposition 4.2/4.3. Proposition 3.1 proposes a novel and interesting formulation of the half bridge using log-likelihood. This is combined with a novel way to deal with the marginal constraints which is a key advance towards efficient high dimensional computing with respect to prior works.
Globally the article puts swiftly together multiple well known ideas and adds key ingredient (decoupling of marginal constraints, log-likelihood IPF) in order to produce an algorithm which removes limitations from prior algorithms. Those limitations being :
- lack of scalability
- need for fully specified velocities
- robustness to missing marginals
Finally the benchmark contains multiple metrics which is helpful in the high dimensional context since the curse of dimensionality renders some metrics less meaningful as they point out.
The explicit description of the algorithm in the appendix is appreciated because it makes the article whole: it contains the theoretical and practical aspects.
Weaknesses: Though inspired by prior work, the section on the neural net parametrization lacks clarity. How is the log-likelihood minimization tied to the mean-matching objective and thus the loss Lmm? This link is succinctly pointed out in Appendix B.5, equation (38).
The training scheme section passes quickly over how the neural networks are trained. Figure 8 in the appendix clarifies the training considerably. The explanation of the discretization method in Section 4.5 is unclear.
Finally, the main advantage of the algorithm DMSB is its scalability; however, there is no complexity analysis of the algorithm with respect to dimension and number of marginals.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How long does training take for scRNA-seq using DMSB compared to MIOFlow and NLSB ?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The author address the limitations of the works which are mainly the proofs of the theoretical convergence of the algorithm in the neural nets parametrization case of the momentum multi marginal SB. Proofs which are key to back up the interesting practical results.
They also sharply point out the limitation which is tied to their use case which is the impossibility for mmmSB to model for the death and birth of cells.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer 3Whf
We deeply thank the reviewer for all the comments. The summary is accurate and the questions are interesting and insightful.
Please kindly see our itemized replies below in order to address the reviewer's concern.
#### **1. How does the log-likelihood minimization is tied to the mean matching objective and thus the loss Lmm?**
Our understanding is that the minimization of the mean matching objective ($L_{MM}$) corresponds to optimizing the lower bound of the log-likelihood, as noted by reviewer 17Zf. This was proved in Proposition B.4.
#### **2. The training scheme section goes rapidly over how the neural networks are trained. The figure 8 in appendix clarifies a lot the training. The explanation on the discretization method is unclear in section 4.5.**
Owing to constraints on page count, Figure 8 has been placed in the appendix; however, it will be reintegrated into the main body of the paper should additional pages become available.
Regarding discretization, the simplest Euler-Maruyama method has been consistently employed throughout the entirety of this manuscript, although our approach can use better discretization as well.
Specific details pertaining to total time and timesteps can be referenced in Appendix Figure 7. We will further explain it in the main paper.
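As an illustration of the Euler-Maruyama scheme mentioned above (not the authors' exact sampler; the drift `a` is a hypothetical stand-in for a learned policy), a phase-space SDE where stochasticity enters only through the velocity, $dx = v\,dt$, $dv = a(x,v,t)\,dt + \sigma\,dW$, can be discretized as:

```python
import numpy as np

def euler_maruyama_phase_space(a, x0, v0, t0, t1, n_steps, sigma, rng):
    """Euler-Maruyama for a momentum SDE with noise only on the velocity:
        dx = v dt,   dv = a(x, v, t) dt + sigma dW.
    `a` is any drift function; here it stands in for a learned policy."""
    dt = (t1 - t0) / n_steps
    x = np.asarray(x0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    t = t0
    xs = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=v.shape)
        x = x + v * dt                        # position: deterministic transport
        v = v + a(x, v, t) * dt + sigma * dW  # velocity: drift + diffusion
        t += dt
        xs.append(x.copy())
    return np.stack(xs), v
```

Because the position update contains no noise term, sampled position trajectories are smooth, which is the motivation for working in phase space.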
#### **3 Complexity (Dimension, marginal, time)**
Thank you for bringing up this matter! We concur that it is important to provide a clear disclosure of the algorithm's complexity within the paper so here we provide the empirical complexity results. These analyses will be integrated into the paper in the future revision.
It is noteworthy that our model demands more time and memory than previous approaches. However, the results we achieve significantly surpass the baseline performance. Here are the detailed numerical values. We record the wall time until all evaluation criteria (MWD, SWD, and MMD) stop improving empirically (i.e., empirical convergence). For sampling, we record the wall time for sampling one forward trajectory with batch size 256.
All of the results are based on single-cell RNA dataset.
##### **Complexity w.r.t Dimensionality (marginals = 5)**
Dimension | Training | Sampling
--- | --- | ---
5 | 24 mins | 1.62 sec
10 | 25 mins | 1.63 sec
50 | 33 mins | 1.84 sec
100 | 44 mins | 2.01 sec
##### **Complexity w.r.t. Marginals (dim=100). Remark: we keep the same time steps, which leads to almost the same sampling time.**
Number of Marginals | Training | Sampling
--- | --- | ---
2 | 32 mins | 2.01 sec
3 | 33 mins | 2.01 sec
4 | 38 mins | 2.01 sec
5 | 44 mins | 2.01 sec
##### **Time Complexity Comparison (dim=100, marginals=5)**
Algorithm | DMSB | NLSB | MIOFlow
--- | --- | --- | ---
Training | 44 mins | 30 mins | 20 mins
|Sampling |2.01 sec|1.6 sec|2.12 sec| | Summary: In this paper, the authors present an algorithm, DMSB (Deep Multi-Marginal Momentum Schrödinger Bridge), to approximate solutions to an extension of the Schrödinger Bridge (SB) problem into phase space (mmmSB), where (i) marginal constraints on the position are given across time and (ii) stochasticity is only introduced on the velocity variable (which makes trajectories smoother). This framework aims at solving multi-marginal trajectory inference problems, i.e., inferring likely stochastic dynamics of particles on a time interval, given snapshots of them at certain time steps. This work proposes several contributions: (i) it extends the approach proposed by [1] (which solves a single-variable formulation of SB) to phase space (Proposition 3.1), (ii) it presents an efficient numerical scheme for the Iterative Proportional Fitting (IPF)-type algorithm presented by [2] and restated in Proposition 4.1, to solve mmmSB in practice, (iii) it shows great performance on realistic high-dimensional single-cell RNA sequencing ($d=100$) compared to previous baselines [3,4].
[1] Likelihood training of schrödinger bridge using forward-backward sdes theory, Chen et al., 2021.
[2] Multi-marginal Schrödinger bridges, Chen et al., 2019
[3] Manifold interpolating optimal-transport flows for trajectory inference, Huguet et al., 2022.
[4] Neural Lagrangian Schr$\backslash$" odinger bridge, Koshizuka et al., 2022.
Strengths: This work provides a well-motivated and theoretically well grounded adaptation of the SB problem to the multi-marginal setting with smooth trajectories. The authors derive a solid framework and present convincing numerical experiments on real-world data.
Weaknesses: - In my opinion, this paper is not easy to follow and some statements are provided without context or comment, which constitutes the main limitation for me at this stage. Since I find the content pretty good, I will increase my score if the authors agree to be clearer in the main paper, by answering my questions and following my general comments given below.
- This paper lacks a theoretical result of convergence (at least non-quantitative, in the ideal setting where the neural networks would perfectly fit the drift terms of the forward and backward SDEs). It is not clear to me how the convergence result stated in [5] extends to the current setting.
[5] Iterative Bregman projections for regularized transportation problems, Benamou et al., 2015.
Technical Quality: 4 excellent
Clarity: 1 poor
Questions for Authors: - What is the reference measure $\xi$ used in the momentum-like setting (see Line 125 and Eq (6)) ? As far as I understand, it is the path given by the Brownian motion propagated along the velocity variable with the corresponding ODE $d x_t= v_t dt$. The authors should state it explicitly.
- In my opinion, $\bar{\pi}$ is the solution of the problem given in Proposition 4.4 without any other constraint. Does the solution given in the main paper actually include the constraint on the boundary ? What is $q_{t_0}$ in Line 187 ?
- I recommend the authors to specify the set of measures on which the optimization is performed in Propositions 4.1, 4.2 and 4.3: is it a collection of couplings or joint distributions over all the N states ?
- As far as I understand, the constraints given by $K_{bridge}$ enable to smooth out the trajectories of the particles along the whole time interval. To prove the efficiency of their method, could the authors compare their method with the procedure where the only constraints are given by $K_{boundary}$ (which should work too if I am not wrong) ?
- What is the definition of $\mathcal{L}_{reg}$ (Algorithm 4) ?
- How many Bregman iterations did you perform in your experiments ?
Major comments:
- I recommend the authors to give the proof of Proposition B.6 in the appendix, since it is not straightforward.
- For sake of clarity, the authors should introduce in the main paper a consistent notation to refer to couplings (eg, $\pi_{t_1, t_2}$) and joint distributions (eg, $\pi_{t_1:t_2}$) in order to avoid any confusion.
Minor comments:
- The authors should insist more on the (stochastic optimal control) formulation of the SB problem that they are solving in this paper. I don't think it is well highlighted in Figure 1.
- There is inconsistency of the control variable ($a$ or $u$) in Appendix B.1.
- I think there is a typo between $\hat{m}$ and $\bar{m}$ in Algorithms 3 and 4.
- For sake of clarity, I think that the system of PDEs in Eq (20) should be displayed in the main paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 1 poor
Contribution: 3 good
Limitations: Unlike [1] or [5] where experiments on Celeba dataset ($d\geq3072$) are performed, the dimension does not exceed 100 in the experiments of this paper. In my opinion, it is crucial to present experiments with such order of dimensionality to study the scalability of DMSB.
[1] Likelihood training of schrödinger bridge using forward-backward sdes theory, Chen et al., 2021.
[5] Score-based generative modeling with critically-damped langevin diffusion, Dockhorn et al., 2021.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer A2vV
We extend our sincere gratitude to the reviewer for the valuable comments. Please kindly find below our itemized responses, presented in an effort to address each of the reviewer's concerns.
#### **1. This paper lacks a theoretical result**
As discussed in the conclusion section, it is important to note that, despite achieving satisfactory performance, we currently lack theoretical convergence results for our approach. However, the recent work of [1] has provided valuable insights into convergence proofs even in scenarios where the exact solution at each Bregman iteration is unavailable. As that work is almost concurrent with ours, we intend to explore this direction as part of our future research.
[1] Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation. arXiv:2305.07247
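For intuition only (this is not the paper's continuous-space algorithm), each Bregman/IPF iteration alternates KL projections onto constraint sets; in the discrete two-marginal case this reduces to classic Sinkhorn row/column scaling. A minimal sketch with illustrative cost matrix, marginals, and regularization:

```python
import math

# Discrete analogue of Bregman/IPF iterations: alternately project a coupling
# onto the two marginal-constraint sets by row/column scaling (Sinkhorn).
# Shown only for intuition; cost, marginals, and eps are illustrative choices.
def ipf_coupling(cost, mu, nu, eps=0.1, iters=50):
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # projection onto {pi : row sums = mu}
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        # projection onto {pi : column sums = nu}
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

pi = ipf_coupling([[0.0, 1.0], [1.0, 0.0]], [0.5, 0.5], [0.5, 0.5])
# after the final column projection, column sums match nu exactly,
# and row sums match mu up to the convergence tolerance
```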
#### **2. What is the reference measure ?**
Thank you for bringing this to our attention. Indeed, your observation is accurate. The path corresponds to the trajectory obtained by propagating the Brownian motion along the velocity variable:
$dx_t=v_tdt$
$dv_t=g_tdW_t$
and it yields analytically available statistics, aligned with those proposed in [2] (see eq. (12)). The results can be obtained using the calculation in Section 6.1 of [3] after setting $g_t=\sqrt{\epsilon}$. We will add this, together with the detailed derivation, in a future revision.
[2] Chen et al. "Multi-marginal Schrödinger bridges"
[3] Särkkä et al. "Applied Stochastic Differential Equations"
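As a small illustration (our own sketch; the values of $\epsilon$, horizon, step count, and path count are illustrative, not the paper's settings), these dynamics can be simulated with Euler-Maruyama and checked against the closed-form marginal statistics of the linear SDE:

```python
import random
import statistics

# Euler-Maruyama sketch of the momentum reference dynamics above:
#   dx_t = v_t dt,   dv_t = sqrt(eps) dW_t.
# eps, horizon, step count, and path count are illustrative choices.
random.seed(0)
eps, T, steps, n_paths = 1.0, 1.0, 200, 5000
dt = T / steps
xs, vs = [], []
for _ in range(n_paths):
    x = v = 0.0
    for _ in range(steps):
        x += v * dt                                  # position driven only by velocity
        v += (eps * dt) ** 0.5 * random.gauss(0, 1)  # Brownian noise enters the velocity
    xs.append(x)
    vs.append(v)

# This linear SDE has analytically available statistics:
#   Var(v_T) = eps*T,  Var(x_T) = eps*T^3/3,  Cov(x_T, v_T) = eps*T^2/2
print(statistics.variance(vs), statistics.variance(xs))
```

The empirical variances should land close to $\epsilon T$ and $\epsilon T^3/3$ respectively, which is what makes this reference measure convenient to work with.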
#### **3. Is the solution of the problem in Proposition 4.4 given without any other constraint? Does the solution in the main paper actually include the constraint on the boundary?**
Yes, the solution of the problem in Prop 4.4 is without any boundary constraint. Specifically,
- The solution of Proposition 4.2 in the main paper **does** include the constraint. The constraint is implied in the $\rho_{t_i}$ in the numerator.
- The solution of Proposition 4.4 in the main paper **does not** include the constraint. Hence, the boundary condition is implied in the $q_{t_0}$ in the numerator, which is induced by the reference path measure eq. (10b).
#### **4. I recommend the authors to specify the set of measures on which the optimization is performed in Propositions 4.1, 4.2 and 4.3?**
Apologies for any confusion. As presented in the paper, we have decomposed the optimization of Proposition 4.1 into two distinct propositions, namely Proposition 4.2 and Proposition 4.3. In the case of Proposition 4.2, the optimization involves joint distributions $\pi_{t_i:t_{i+1}}$, considering the two marginals at $t_i$ and $t_{i+1}$. Meanwhile, Proposition 4.3 entails the optimization of the joint distribution $\pi_{t_0:t_N}$.
#### **5.Could the authors compare their method with the procedure where the only constraints are given by $K_{boundary}$?**
We have conducted the experiments per the suggestions in the review, and we can verify that the suggested algorithm is functional, as evidenced by Figure 2 in the rebuttal PDF. Importantly, however, its performance is not on par with our proposed approach, particularly in terms of convergence speed and final performance.
#### **6. What is the definition of $L_{reg}$ (Algorithm 4) ?**
Sorry for the missing reference. $L_{reg}$ denotes the regularization mentioned in Line 210. Specifically, it is the regularization term proposed in [4], which can potentially enforce the condition of Prop. 4.5 (see Line 210).
[4] Ki-Ung Song, "Applying Regularized Schrödinger-Bridge-Based Stochastic Process in Generative Modeling"
#### **7. How many Bregman iterations did you perform in your experiments ?**
We use 15 Bregman iterations for all experiments except the Petal experiment, which uses 30.
#### **8. Provide the proof of Proposition B.6?**
As we are unable to update the revision at this stage, we provide here a sketch of the proof and will provide a full version in a future revision:
The results can be derived similarly to the proof of Proposition B.5, but with variations beginning from line 480. The constraint in $K_{boundary}$ necessitates decomposing the joint distribution into $p_{t_i}(x_{t_i})$ and $\hat{q}(v_{t_i}|x_{t_i})$. However, in the optimization of $K_{bridge}$, no constraints remain, implying that the global minimization of the optimization will precisely match the reverse diffusion while satisfying the boundary condition. Interestingly, despite lacking density information from the boundary condition of the reference path measure, we fortunately possess samples from it, which can be utilized to construct the reference path measure for the next iteration. To achieve this, it is imperative to ensure that the joint distribution at the boundary remains equivalent (i.e., as given by eq.(42b)).
#### **9. Consistent notation to refer to couplings (e.g., $\pi_{t_1,t_2}$) and joint distributions?**
We apologize for any confusion caused. As per your suggestion, we have rectified the notation throughout the rebuttal replies. The updated version will be included in the forthcoming revision of the document.
#### **10. Inconsistent control variable; the PDE should be displayed in the main paper**
Thanks for the careful reading! It is indeed a typo and easy to fix. We will correct the confusing typo and variable, and display the PDE in the main paper once we can update the revision, provided we do not exceed the page limit.
#### **11. Image experiments?**
Regrettably, due to time constraints and limited hardware resources (one Nvidia 3090Ti GPU), we are unable to conduct image experiments during the rebuttal phase. However, we will thoroughly consider this aspect for future investigations should the opportunity arise. Thank you for your understanding and consideration of our constraints.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
In particular, the answers to questions 1, 2, 4, 6, 7, 8, 9 and 10 are satisfying. I would like to insist that some technical details on the experiments such as the **time discretization** along the paths (mentioned in the rebuttal to Reviewer 17Zf), the **number of Bregman iterations** and the **regularization term** $L_{reg}$ should appear *clearly* in the revised version of the paper. Overall, the authors also agree that the presentation of the paper may be improved (setting, intuition for the results and their proofs, notation, ...) so that it is easier to read and to understand the content, which is a good point.
**About question 5**: I thank the authors for the experiment made on this aspect. I really think that it is of great interest (i) to prove that asymptotically, the constraint $K_{bridge}$ is not needed (if I understand correctly) but also (ii) to show the importance of this constraint in practice, since it smooths out the trajectories. I recommend the authors to include this discussion in the revised version of their paper, to bring even more intuition on their method.
However:
- **about question 11**: I understand the statement made by the authors, but I still think that the lack of experiments in *real* high-dimensional setting is the main limitation of the paper on the experimental side in its current version. I really think that it would make the contribution stronger if such experiments were conducted !
- **about question 4**: unfortunately, **I am still unconvinced** by the explanation given by the authors for the result of Proposition 4.4 and give details about it below.
Since the reference measure $\bar{\pi}$ is a path measure along the whole path (i.e., from $t_N$ to $t_0$), we have by definition that the path measures $\bar{\pi}_{i:i+1}$ and $\bar{\pi}_{i-1:i}$ have the same marginal at $t_i$. Then $\bar{\pi}$ satisfies the constraint $K_{bridge}$ and therefore the solution to the optimization problem in Proposition 4.4 should simply be $\bar{\pi}$...
Given the answer that you gave me (and the answers given to other reviewers about Proposition 4.4), what I understand from your explanation (**and from Figure 8**) is the following: what you compute at this stage is $\mu^\star=\mu_{t_0}(\bar{\pi}_{|t_0})^R$, where $R$ is the time-reversal operator. As far as I understand, $\mu^\star$ is therefore the solution to the KL optimization problem with constraints $K_{bridge}$ and $K_{t_0}$. Hence, I think that the constraint $K_{t_0}$ is missing in Proposition 4.4.
- Could the authors explain to me where I am wrong ?
- Could the authors explain to me what $q_{t_0}$ refers to ? I did not find any definition of it in the paper...
Thank you !
---
Reply to Comment 1.1.1:
Title: Response to Reviewer A2vV
Comment: ## To Reviewer A2vV:
Thank you for your attentive reading. These questions are right to the point and extremely helpful to us. Glad to hear that a good amount of previous concerns are resolved, and we are delighted to be able to discuss further.
1. **Details on discretization, number of Bregman iterations, regularization term should appear clearly in the revised version**: Thank you. We completely agree and will make sure that will be the case.
2. **I really think that it would make the contribution stronger if such experiments were conducted !**: We fully agree, and this is a high priority on our to-do list for future work. Again, we apologize for our insufficient computational resources.
Just to help us better design such experiments in the future, may we ask what could be a good motivation for considering multi-marginal SB in the pixel space, i.e. each marginal being a distribution of images? Any insight from the reviewer will be deeply appreciated and helpful for us to design the recommended experiments. Thank you!
4. **The solution to the optimization problem in Prop. 4.4 should simply be $\bar{\pi}$:** The reviewer touched upon a subtle but very interesting complication (thank you!). We actually encountered it during the development of our algorithm and thought about it carefully. Interestingly, when formulating the reference path measure, the path measure inherently satisfies $K_{bridge}$ in an "automatic" manner. Nevertheless, as illustrated in Figure 2 of the supplementary PDF, the optimization outcomes fall short of the anticipated level of satisfaction. This discrepancy might be attributed to the absence of a smoothing procedure, as astutely pointed out by the reviewer in the "about question 5" section. Recognizing this, we devised an additional optimization strategy for $K_{bridge}$ which is not only theoretically aligned with Bregman iterations but also heuristically smooths out the trajectories, as the reviewer mentioned. Consequently, the algorithm was refined to its current version. One can observe the improvement in Figure 2 of the supplementary PDF. Per your suggestion, we will certainly incorporate this discussion into the revised version.
5. **What you compute at this stage is $\mu^{\star}=\mu_{t_0}(\bar{\pi}_{|t_0})^R$, and therefore the solution to the KL optimization problem with constraints $K_{bridge}$ and $K_{t_0}$:** If we understood your notation correctly, we think you are referring to $\pi^{\star}=\mu_{t_0}(\bar{\pi}_{|t_0})^R$ rather than $\mu^{\star}=\mu_{t_0}(\bar{\pi}_{|t_0})^R$? If so, you are mostly correct. However, what we compute at this stage is $\pi^{\star}=p_{t_0}(\bar{\pi}_{|t_0})^R$ in your notation, where $p_{t_0}\equiv \hat{q}_{t_0}$ and $\hat{q}_{t_0}=\int \bar{\pi}_{t_0,t_N}\,d\mu_{t_N}$. This does not include the constraint $K_{t_0}$ (which, we believe, corresponds to $\mu_{t_0}$ in your notation); it only includes the constraints $K_{bridge}$. Conceptually, in order to find the minimizer of $KL(\pi|\bar{\pi})$ within the constraints $K_{bridge}$, one does not need to include information from $K_{t_0}\in K_{boundary}$.
7. **What is $q_{t_0}$?** This should be $\hat{q}_{t_0}$, as corrected in the reply above. Thank you very much for catching it, and we are very sorry for the typo.
Any further questions are welcome. Your comments greatly helped us to improve the clarity of our work and are deeply appreciated! | Summary: The paper addresses the topic of multi-marginal trajectory inference in high dimensions using Schrödinger Bridge (SB). In particularly, the authors focus on the so-called momentum SB in phase space where the resulting trajectories in position space are smooth interpolations between the intermediate marginals. The motivation for this is that in real physical systems smooth trajectories are often more likely than abrupted changes in drift direction at intermediate marginals. After an introduction of the preliminaries of Schrödinger Bridge and Bregman iterations, the phase space formulation of SB is introduced, which this work then builds upon. By restructuring the problem formulations and introducing an efficient training scheme for the involved function approximators, the authors show how to solve the multi-marginal momentum SB problem in a computationally efficient way. This is then utilized in the experiments which comprise both a synthetic as well as a 100-dimensional real-world use-case.
Strengths: - Originality: The authors make several important technical contributions that allow for a computationally efficient solution of the (multi-marginal) momentum SB problem in high-dimensional phase spaces. Related work and the foundations the authors build upon are discussed and cited in the manuscript.
- Quality: All original propositions are supported with detailed proofs in the appendix (which I did not check). The experimental results show a convincing improvement over the state of the art. In the conclusion, limitations of the work are addressed.
- Clarity: The paper has a good introduction of the work it builds on. In addition, the novel contributions are clearly stated. The submission is well written and quite accessible on pages 1-4 where the relevant prior work is explained.
- Significance: The presented contributions are likely to enable machine learning practitioners to apply the momentum (multi-marginal) SB method to many interesting high-dimensional trajectory-inference problems. The advances over the state of the art are demonstrated convincingly on both an artificial and a real-world use-case. Further, the approach is also applicable if the marginal velocity distributions are not available, which is probably relevant in many real world use-cases where velocity information can be much harder to obtain the positional information.
Weaknesses: - Clarity: The novel contributions are likely difficult to understand for a non-expert reader. While proofs are presented in great detail in the appendix, a more intuitive explanation or interpretation of propositions 3.1, 4.2, 4.4, 4.5 is lacking. In addition, reproduction and/or extension of the results is probably a significant effort because the source-code is not provided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - My main suggestion to the authors would be to make the ideas behind their original propositions accessible to a wider audience by providing some intuition and explanation beyond the mathematical proofs.
- In addition, open-sourcing the code would reduce the entry barrier for practitioners for applying the method.
- In the abstract it reads "In this article, we extend SB into phase space". Is that claim really justified? I had the impression that SB in phase space was introduced in reference 20 and that the novel contribution is in the derivation of a computationally efficient way for half-bridge iterative proportional fitting via negative log-likelihood minimization.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations of the presented approach are discussed in the "conclusion and limitations" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer 5VcR
We deeply thank the reviewer for all the comments. The summary is accurate and the questions are interesting and helpful.
Please kindly see our itemized replies below for addressing the reviewer's concerns.
#### **1. a more intuitive explanation or interpretation of propositions 3.1, 4.2, 4.4, 4.5 is lacking.**
We express our gratitude for the thoughtful suggestion, and we intend to incorporate intuitive explanations as well as additional remarks in the future revision. These elucidations will enhance the clarity and comprehensibility of our work, contributing to its overall accessibility. Namely, we are going to add the following clarifications:
1. Remark for Proposition 3.1: Within each half-bridge IPF, the variable $Z_t$ (resp. $\hat{Z}_t$) essentially learns the reverse-time stochastic process induced by its counterpart. This can also be viewed as minimizing the approximated parameterized negative log-likelihood, as correctly pointed out by reviewer 17Zf.
2. Remark for Proposition 4.2: In order to match the reference path measure in the KL-divergence sense, one needs to match both the intermediate path measure, eq. (9a), and the boundary condition, eq. (9b). In the traditional two-boundary SB case, matching the boundary condition is often disregarded, owing either to a predefined data distribution or a tractable prior. However, in our specific case, since the velocity is not predefined, it becomes imperative to address this and optimize it through the application of Langevin dynamics.
3. Remark for Proposition 4.3: Following the same argument as in the remark on Proposition 4.2, it becomes evident that, in this particular scenario, there is no need to account for the data distribution, since there are no position constraints when optimizing with $K_{bridge}$. Consequently, the optimal solution will inherently align faithfully with the reverse diffusion and adapt to the boundary conditions imposed by the reference path measure.
4. Remark for Proposition 4.5: We indeed underexplained an important nontrivial fact (thank you so much for catching it): the unique structure of SB leads to the beautiful fact that the score is proportional to the sum of the forward and backward drift terms (kindly see Lemma B.3 and Line 448 for details). This facilitates the sampling of velocity. Specifically, the score function can be obtained using eq. (11), as supported by the findings in eq. (24). It can also be understood as one realization of Nelson duality (see Lemma 1 in [1], and [2]).
[1] Vargas et al. "Solving Schrodinger Bridges via Maximum Likelihood"
[2] Nelson, Edward. "Dynamical theories of Brownian motion."
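As a toy numeric illustration of this fact (our own example using a stationary Ornstein-Uhlenbeck process, not the paper's SB setting; all parameter values are illustrative):

```python
# Toy check of the Nelson-duality fact quoted above: the score is proportional
# to the sum of the forward and backward drifts. We use a stationary
# Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, which is
# time-reversible, so its backward drift equals its forward drift, and
# every quantity is in closed form.
theta, sigma = 1.5, 0.8
s2 = sigma**2 / (2 * theta)      # stationary variance of the OU process

def f_fwd(x):
    return -theta * x            # forward drift

def f_bwd(x):
    return -theta * x            # backward drift (reversibility of stationary OU)

def score(x):
    return -x / s2               # d/dx log N(0, s2)

x = 0.7
# Nelson duality in this example: f_fwd + f_bwd = sigma^2 * score
print(f_fwd(x) + f_bwd(x), sigma**2 * score(x))  # both equal -2*theta*x
```

Algebraically, $-2\theta x = \sigma^2\cdot(-x\cdot 2\theta/\sigma^2)$, so the two sides agree exactly; in the SB setting the drifts are learned networks rather than closed forms, but the same relation is what enables velocity sampling.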
#### **2. In addition, reproduction and/or extension of the results is probably a significant effort because the source-code is not provided.**
Thanks for the suggestion. We have provided the codebase to the ACs according to the NeurIPS 2023 rebuttal instructions. Furthermore, if the paper is accepted, we will make the code publicly available to facilitate reproducibility and promote transparency in our research. Thank you for your understanding; we are committed to making the necessary improvements to enhance the quality of our work.
#### **3. In the abstract it reads "In this article, we extend SB into phase space". Is that claim really justified? I had the impression that SB in phase space was introduced in reference 20 and that the novel contribution is in the derivation of a computationally efficient way for half-bridge iterative proportional fitting via negative log-likelihood minimization.**
The reviewer is correct and our wording was imprecise. We will replace it in the abstract by
"We extend the approach in reference [20] to operate in continuous space". In the main text we will continue this clarification with: "This circumvents the need for expensive space discretization which does not scale well to high dimensions. We also address the challenge of intricate geometric averaging in continuous space setup by strategically partitioning and reorganizing the constraint sets. Furthermore, we enhance the algorithm's computational efficiency by incorporating the method of half-bridge IPF."
in order to state the contributions of our work more precisely.
The helpful comment is very much appreciated! | Rebuttal 1:
Rebuttal: # To All Reviewers
We thank all reviewers for their valuable comments. We are excited that the reviews identified the novelty of our contribution, appreciated our experimental validations, and acknowledged the significance of our work.
The common criticisms concerned the insufficient complexity analysis and inaccurate descriptions of certain concepts, for which we apologize.
In a revised version, we will integrate the clarifications suggested by all the reviews to make the paper easier to understand, together with additional experimental validations suggested by the reviewers.
An itemized summary for each reviewer is listed below, which in our humble opinion resolves all raised concerns. We sincerely thank you for your time and help in all cases.
## Summary of Revision for all Reviewers
- In alignment with NeurIPS 2023's code disclosure guidelines, we have provided the Area Chair with the anonymous GitHub link to our codebase. Additionally, we have included the command line instructions necessary to replicate the results in the paper, along with an estimate of the expected waiting time.
- We will add the complexity analysis as the reviewers suggest.
- We will release the code when the paper is accepted.
- We will correct the inaccurate statement of the Likelihood as being suggested by reviewer 17Zf.
- We will update the notation for the coupling and the joint distribution. Specifically, $\pi_{t_i,t_{j}}$ will represent the coupling of two marginals, and $\pi_{t_i:t_{j}}$ will represent the joint distribution (path measure), as suggested by reviewer A2vV.
- We will correct all the typos mentioned by reviewers. Thanks again for your careful and helpful reviews.
Pdf: /pdf/a39d1196172ef374e4f3bf0e19399581c703bad5.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors extend the diffusion Schrodinger bridge methodology to the multi-marginal setting, whereby the marginals are ordered sequentially, in addition to introducing momentum into the diffusion Schrodinger bridge framework. The method shows excellent performance on trajectory inference tasks.
Strengths: - The proposed work fills a gap in the literature, and the authors demonstrate how diffusion-model-based Schrodinger bridges [1] and the implementation of [2] can feasibly be plugged into the multi-marginal setting [3].
- The method shows excellent performance on trajectory inference tasks, outperforming other methods. The approach feels quite natural.
[1] Bortoli et al https://arxiv.org/abs/2106.01357 \
[2] Chen et al https://arxiv.org/abs/2110.11291 \
[3] Chen, Multi-marginal Schrodinger bridges, 2019
Weaknesses: - The computational complexity of the algorithm is not discussed. It appears multiple (2N reverse diffusions?) must be computed per IPF step. This seems quite costly. I would appreciate discussion and more transparency on this.
- It is very difficult to clearly see what is the main algorithm and training procedure from the main text. I see algorithms are provided in the appendices but these are also not very well detailed in my opinion. The paper would improve in clarity and reproducibility if this was given more attention. This is especially true as no code is provided. Indeed, some training details like caching trajectories are mentioned in passing (as was introduced in [1], which should be cited) but it is not clear from this paper what the authors mean and without a reference it is difficult to know.
- Possible error in likelihood proof / explanations
In line 454, $\mathbb{E}[y_0]$ (and for $\tilde{y}_0$) is decomposed into $\mathbb{E}[y_T]$ minus the path integral of the SDE for $y$. This seems fine. However, my understanding is that unless the diffusion has fully converged to the prior (i.e. after convergence of the IPF, or with infinite regularization in SB), the term $\mathbb{E}[y_T]=\mathbb{E}[\log p(m_T)]$ is from the terminal point in the diffusion (which is not fixed until after convergence) hence must depend on the network parameters used to define the diffusion. This is commented on in Theorem 2 of [3]. This term is dropped in [2] and in this paper, which makes the likelihood interpretation incorrect for training each step of the IPF.
Instead further work is required, as in [3], to show that time-reversal is a bound of the likelihood and gives **approximate** likelihood training.
The training procedure will still work as it coincides with the time-reversal of a diffusion and the same training procedure as given in [1] but the likelihood explanation does not seem justified as it is currently written.
It appears other reviewers had similar concerns with [2], and also that the overall procedure really is not likelihood training but IPF with time-reversal where each time-reversal can be viewed as approximate likelihood. https://openreview.net/forum?id=nioAdKCEdXB
--------
I am conflicted by this paper. Whilst overall I think it is a good contribution methodologically and believe it can be justified with extra work. I have concerns about clarity and more importantly that this likelihood interpretation is misleading and possibly incorrect.
I would be happy to increase my score if the authors address this. I am happy to be corrected if I misunderstand the likelihood interpretation.
- [1] Bortoli et al https://arxiv.org/abs/2106.01357
- [2] Chen et al https://arxiv.org/abs/2110.11291
- [3] Song et al, Maximum likelihood training of score-based diffusion models, 2021
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Why use mean matching of [1] rather than divergence based training advocated in [2]? I agree they are theoretically equivalent up to an integration by parts but practically different. Similarly, the authors state the method is based on [2] (e.g. table 1 and line 116) but then if using the training objective given by [1] is it not more accurate to say the method is based on [1]?
- What is the complexity and training time for this method? It appears that one must perform multiple reverse time diffusions per IPF step.
- [1] Bortoli et al https://arxiv.org/abs/2106.01357
- [2] Chen et al https://arxiv.org/abs/2110.11291
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: - The authors discuss lack of theoretical convergence results
- Computational complexity and time to train is not discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer 17Zf
We would like to express our sincere gratitude for your valuable feedback and comments. We truly appreciate the time and effort you invested in assessing our submission.
Please kindly see our itemized replies below in order to address the reviewer's concerns.
#### **1. The computational complexity of the algorithm is not discussed. It appears multiple (2N reverse diffusions?) must be computed per IPF step. This seems quite costly. I would appreciate discussion and more transparency on this.**
This is a vital question which we should address in the main paper. The short answer is yes, there will be 2N reverse diffusions. However, this only marginally increases the computational cost compared with prior simulation-based work [1], [2]. The reason is as follows:
We discretize time into $S$ steps (where $S$ is set to 200 for the Petal experiment and 400 for the other instances). The reference path measure $\bar{\pi}_{t_i:t_{i+1}}$ is simulated using $S/(N-1)$ timesteps, given that we have $N$ marginals. To facilitate the training of $z_t^{\theta}$, we must simulate the reference path measure $\bar{\pi}_{t_i:t_{i+1}}$ for all $i \in [0,N-1]$. Consequently, the total number of simulation steps remains $S$, which is not prohibitively large. However, additional simulations are necessary for the velocity component. As illustrated in Figure 7 of the appendix, this incurs an overhead of $2N$ NFE (two network inferences per marginal) compared with prior work [1], [2].
[1] Valentin et al. "Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling"
[2] Chen et al. "Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory"
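The step-count bookkeeping above can be sketched as follows (the helper names are ours, not the paper's API; the overhead formula restates the $2N$ NFE figure quoted above):

```python
# Bookkeeping sketch for the discretization described above. With S total
# steps shared across the N-1 bridges between N marginals, each bridge uses
# S/(N-1) steps, and simulating the velocity component adds roughly 2*N
# extra network evaluations (NFE): two network inferences per marginal.
def steps_per_bridge(total_steps: int, n_marginals: int) -> int:
    return total_steps // (n_marginals - 1)

def extra_velocity_nfe(n_marginals: int) -> int:
    return 2 * n_marginals

S, N = 400, 5                    # S = 400 as in the non-Petal experiments
print(steps_per_bridge(S, N), extra_velocity_nfe(N))  # 100 steps per bridge, 10 extra NFE
```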
#### **2. Unclear algorithm and missing details on caching trajectories and source code**
We genuinely apologize for any confusion caused by the complex presentation of the algorithm. In the revised version, we will provide a clearer and more straightforward explanation of the algorithm to improve its comprehensibility for all readers. Additionally, we will elaborate further on specific techniques, such as the trajectory cache, which are commonly employed in prior works but may not be familiar to a broader audience.
We have provided the codebase to the ACs according to the NeurIPS 2023 rebuttal instructions. Furthermore, if the paper is accepted, we will make the code publicly available to facilitate reproducibility and promote transparency in our research. Thank you for your understanding; we are committed to making the necessary improvements to enhance the quality of our work.
#### **3. clarity and more importantly that this likelihood interpretation is misleading and possibly incorrect.**
Yes, it is misleading and we will add a remark in a revised version to improve the clarity. Our optimization is to find a path measure that best approximates (in KL) the time reversal of the reference path measure obtained in the previous iteration. This optimization approximates likelihood training.
#### **4.Why use mean matching of [1] rather than divergence based training advocated in [2]?**
This work is inspired by both [1, 2], and we think [1] leads to a more scalable training objective, since there is no expensive auto-grad computation. We will update Line 116 to better state the importance of [1].
[1] Valentin et al. "Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling"
[2] Chen et al. "Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory"
#### **5. Complexity**
Thank you for bringing this matter to our attention! We agree that it is important to clearly disclose the algorithm's complexity within the paper, so here we provide concrete numbers; these will be integrated into the paper in a future revision.
Our model does demand more training time than previous approaches, but the superior results it achieves significantly surpass the baseline performance. Detailed numbers follow. We record the wall time until all evaluation criteria (MWD, SWD, and MMD) stop improving (i.e., empirical convergence). For sampling, we record the wall time for sampling one forward trajectory with batch size 256.
All of the results are based on single-cell RNA dataset.
##### **Complexity w.r.t Dimensionality (marginals = 5)**
| Dimension | Train | Sampling |
| --- | --- | --- |
| 5 | 24 min | 1 sec |
| 10 | 25 min | 1.6 sec |
| 50 | 33 min | 2.0 sec |
| 100 | 44 min | 2.02 sec |
##### **Complexity w.r.t. Marginals (dim = 100)**
Remark: we keep the same number of time steps, which leads to almost the same sampling time.
| Number of Marginals | Train | Sampling |
| --- | --- | --- |
| 2 | 32 min | 2.02 sec |
| 3 | 33 min | 2.02 sec |
| 4 | 38 min | 2.02 sec |
| 5 | 44 min | 2.02 sec |
##### **Training Time Comparison (dim = 100, marginals = 5)**
| Algorithm | DMSB | NLSB | MIOFlow |
| --- | --- | --- | --- |
| Training | 44 min | 30 min | 20 min |
| Sampling | 2.02 sec | 1.6 sec | 2.12 sec |
#### **6. The authors discuss lack of theoretical convergence results**
As stated in the conclusion, while our performance results are promising, we acknowledge that a theoretical convergence analysis is currently lacking. However, a recent work [3] addresses the convergence proof in scenarios where the exact solution at each Bregman iteration is not readily available. As that work is nearly concurrent with ours, we hope to consider it as a direction for future research.
[3] Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation. arXiv:2305.07247
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, it has addressed my concerns provided the stated changes are implemented.
I will raise my score to weak accept.
DiffTraj: Generating GPS Trajectory with Diffusion Probabilistic Model | Accept (poster) | Summary: The paper proposes DiffTraj: a trajectory generation model based on probabilistic diffusion. The model is trained using real spatio-temporal trajectory data, and aims to generate new trajectories with similar characteristics. This is motivated by the purpose of preserving privacy information that may be present in the actual data. Like all diffusion-based models, the proposed model has a forward and a reverse process, and the latter has an U-Net component, whose importance is highlighted in the ablation studies. Contextual information such as the starting time and location can be provided as input to the model in an encoded form, using a neural network trained on the actual dataset. The influence of this contextual information on the generated data can be controlled by a parameter. It is shown that the trajectories generated by DiffTraj are statistically more similar to the original ones (in two datasets) compared to other trajectory generation models. It is also shown that the trajectories generated by DiffTraj can be used instead of the original trajectories to train models for downstream tasks such as inflow and outflow predictions from particular blocks in the cities.
Strengths: 1) A new model DiffTraj is proposed for trajectory generation. It uses the currently popular probabilistic diffusion framework, but modifies it to bring in encoded contextual information, and an U-Net component in the reverse process to predict the noise levels for every next step
2) Good generative performance is shown, where the trajectories generated by this approach are shown to be statistically more similar to the original ones compared to those generated by other methods
3) The authors explore various practical uses of their model, such as downstream tasks (eg. inflow/outflow prediction), transfer learning (training model on one city and adapting it to another) etc
Weaknesses: 1) The model considers all trajectories as IID and does not aim to utilize any notion of "trajectory clustering" into their model. [In their defence, the encoded contextual information may carry such clues]
2) The downstream task experiment is introduced almost as an afterthought, though such tasks seem to be the main motivation for the work. Also, these experiments seem to be incomplete [see questions below]
3) The model includes U-Net as an important component for the reverse process, and its success is highlighted through ablation studies. But the exact role or contribution of the U-Net is not discussed
4) The evaluation metrics used seem to be based mainly on spatial aspects, not temporal aspects
Overall, I like the broad idea of the paper. I am willing to improve my ratings if the authors can address my concerns.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) I would expect the trajectories in any city to have "clustering" tendencies, some origins and destinations maybe "hotspots", some road segments (eg. highways) maybe "hotspots", etc. Similarly, there may be peak hours of trajectories. Or, certain types of trajectories may be more common in certain hours of the day (eg. those heading to downtown office areas during morning hours). Do you think DiffTraj can produce these features in their simulations?
2) If not, is it possible to incorporate them into the model somehow?
3) Can we have evaluation metrics based on time and velocity, instead of only space-based measures (trip error, length error etc)?
4) In the downstream experiment, it is shown that various models perform almost equally when trained using the generated data instead of the actual data. But to really make the point, it should be shown that these models would have performed less well had they been trained with trajectories generated by some other baseline models
5) What is the exact role played by the U-Net in the reverse process? Could any other architecture not have played the same role?
6) If we want trajectories for a particular downstream task, can we get better results if we can use task-specific loss functions while training?
7) What do the figures in Fig2 convey? Are they obtained by overlaying a set of trajectories?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate your high-quality review and valuable suggestions. Due to space limitations, we merged some of the weaknesses and questions you mentioned. We also added a new **one-page PDF** of the results. Please kindly check it out. We clarify your concerns below:
> [W1]
Thank you for your insightful observation regarding the treatment of trajectories in our model. Indeed, we treat trajectories as IID to ensure broad applicability (e.g., transfer learning) across datasets. While our model does not explicitly employ "trajectory clustering," the contextual information encoding effectively captures the inherent similarities and differences among trajectories. As seen in **Fig. 10**, DiffTraj effectively simulates the original trajectory distribution, with brighter main roads and darker surrounding areas. In addition, we found that using the model to generate only scatter points (i.e., Diff-scatter) can also reproduce the non-IID distribution properties.
> [W2] [Q4]
Thank you for your constructive feedback regarding the downstream task experiment. We acknowledge the importance of this experiment and conducted the corresponding baseline experiments to address this:
| Method | MSE/RMSE/MAE (inflow) | MSE/RMSE/MAE (outflow) |
| --- | --- | --- |
| GAN | 6.78/2.60/1.76 | 5.97/2.44/1.68 |
| VAE | 6.79/2.61/1.76 | 5.81/2.41/1.66 |
| Diffwave | 4.62/2.16/1.52 | 4.75/2.18/1.56 |
| Original | 4.42/2.10/1.50 | 4.64/2.15/1.53 |
We train **GWNet** models using the trajectories generated by each baseline method and compare their performance with models trained using the original trajectories (due to the limited space of the rebuttal, we report only the GWNet prediction results).
Our results show that models trained on trajectories generated by DiffTraj come closer to the real-data performance than those trained on baseline-generated trajectories, in line with the trends shown in Table 1. We believe that these additional experiments and the revised presentation will provide a more comprehensive and clearer assessment of our model's capabilities.
> [W3] [Q5]
In this work, UNet offers several distinct features:
1. Feature extraction and recovery: the downsampling path (encoder) of the UNet captures contextual information and low-level features in the trajectory by reducing the spatial dimension and increasing the depth (number of channels); the upsampling path (decoder) increases the spatial dimension while decreasing the depth, allowing the UNet to recover high-resolution features of the trajectory.
2. Skip Connections: These ensure the retention of fine-grained details, especially crucial during detailed data reconstruction.
3. Combination of local and global information: UNet can consider both local and global multi-level information.
These features enable the UNet to precisely predict noise levels during the reverse denoising process for trajectory generation. **Fig. 13** and **Fig. 14** visualize this reverse denoising process. In addition, we experimented with non-UNet structures (**Diff-wo/UNet** and **Diffwave**), but they did not produce satisfactory results.
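The encoder/decoder/skip structure described above can be sketched in one dimension (a toy illustration with average pooling and nearest-neighbor upsampling standing in for the learned convolutions; this is not the paper's architecture):

```python
import numpy as np

def unet_1d_sketch(x):
    """Structural sketch of the UNet ideas above (illustrative only):
    encode by downsampling, decode by upsampling, and fuse
    fine-grained detail back in via a skip connection."""
    skip = x                                # skip connection keeps detail
    enc = x.reshape(-1, 2).mean(axis=1)     # "encoder": halve spatial dim
    dec = np.repeat(enc, 2)                 # "decoder": restore spatial dim
    return dec + skip                       # fuse local and global info

x = np.arange(8, dtype=float)
y = unet_1d_sketch(x)
assert y.shape == x.shape  # the original resolution is recovered
```

The point of the sketch is only the shape of the computation: information lost by downsampling is reintroduced through the skip path, which is what makes precise per-point noise prediction possible.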
> [W4] [Q3]
We appreciate your suggestion on time- and velocity-based metrics. We tested the similarity between the generated results and the original data using the **Kolmogorov-Smirnov statistic** at both the time level and the velocity level. The metric yielded 0.94 (time) and 0.93 (velocity) for the Chengdu dataset, and 0.94 for both on the Xi'an dataset. This indicates that the generated trajectories also simulate the distribution of the original trajectories well in the temporal profile.
In addition, we visualize the results of both metrics for a better view. Please kindly see Figure 2 in the **rebuttal PDF**.
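For reference, the two-sample Kolmogorov-Smirnov statistic used above is the maximum gap between the empirical CDFs of the two samples; a minimal pure-Python sketch (in practice `scipy.stats.ks_2samp` computes the same quantity):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    values = sorted(set(a) | set(b))

    def ecdf(sample, v):
        # fraction of the sample at or below v
        return sum(x <= v for x in sample) / len(sample)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in values)

# identical samples -> statistic 0; fully disjoint samples -> statistic 1
print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0
print(ks_statistic([0, 1, 2], [10, 11, 12]))  # 1.0
```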
> [Q1] [Q2]
Thank you for your in-depth insights. We are pleased to confirm that our proposed model can exhibit specific patterns and features. We provide visual evidence of DiffTraj's capability to simulate these patterns in the **rebuttal PDF**.
For the hotspots you mentioned, it can be seen from **Fig. 2** or **Fig. 10** that the number of trajectories clustered on different road segments is different, thus revealing the hotspots of the road. In addition, in **Fig. 9**, we compare the clustering tendencies of trajectories within the city (we use a heatmap in this paper). We can clearly observe that the clustering tendencies of the generated trajectories maintain a high level of consistency with the original trajectories.
Regarding peak hours and time-specific patterns, our model can identify and simulate patterns associated with peak hours and speed attributes. This is demonstrated by a high number of peak-hour trips with slower speeds and fewer trips with faster average speeds during free periods (e.g., midnight).
> [Q6]
Thank you for raising this important point for the use of task-specific loss functions.
We think incorporating task-specific loss functions can be beneficial when the generated trajectories are intended for a specific downstream task. By optimizing for a task-specific loss, the model can be guided to generate trajectories that are more aligned with the requirements and nuances of the target task. For example, if the task is related to predicting pedestrian movement in a shopping mall, the loss function might emphasize trajectories that align with typical shopping behaviors or patterns. We believe it's a valuable direction for future work.
> [Q7]
Yes, Fig. 2 is obtained by mapping a set of trajectories onto the map (see **Fig. 10** for a larger view of the effect). This figure mainly wants to convey the following messages:
1. Compare the geographical distribution of different trajectory generation methods on a real map.
2. Since the trajectories drawn overlap, areas with more trajectories appear brighter, which approximately reflects the non-IID distribution of all trajectories (main roads vs. side roads). In addition, this may partially answer your concerns in [W1] and [Q1] about the clustering of trajectories.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses to my questions. I am quite satisfied, and hence I am upgrading my rating. I suggest that the authors include the points about prominence of certain trajectories, and the comparison with GAN and VaE (for the inflow/outflow experiment), in the final version of the paper if accepted.
---
Reply to Comment 1.1.1:
Comment: We appreciate the opportunity to clarify the question. We certainly (if accepted) add the content of our discussion in a subsequent version. Thank you again for your review and insightful suggestions. | Summary: The paper provides an innovative diffusion probabilistic-based model to simulate realistic GPS trajectories. The main contributions are as follows: 1) The paper introduces a diffusion-based probabilistic model that captures spatio-temporal dependencies in GPS trajectories. This model allows for personalized trajectory generation, considering individual preferences and behaviors. 2) The proposed approach generates personalized trajectories for individual users, enhancing the realism and accuracy of the generated trajectories compared to traditional methods. 3) By incorporating real-world GPS data, the generated trajectories closely resemble actual user movements, making them suitable for various location-based applications. 4) The paper demonstrates the scalability of the proposed method, enabling the generation of large-scale trajectory datasets efficiently. 5) The authors perform an extensive experiment of the proposed approach, comparing it with existing trajectory generation methods, showcasing its superiority in terms of trajectory realism and diversity.
Overall, the paper presents a significant advancement in trajectory generation by introducing a diffusion probabilistic model and demonstrating its effectiveness in generating personalized and realistic GPS trajectories. The approach has the potential to impact various domains, including LBSs, transportation, and urban planning.
Strengths: 1.Generating realistic and personalized GPS trajectories has wide-ranging applications in various fields. The paper addresses this important problem and proposes a solution that has the potential to enhance applications such as LBSs, transportation, and urban planning. The diffusion probabilistic model offers a novel perspective and advances the state-of-the-art in trajectory modeling and generation.
2.The proposed approach generates personalized trajectories, allowing for a more accurate representation of individual user movements. This customization increases the applicability of the method in various domains. The generated trajectories closely resemble actual user movements, exhibiting realistic patterns and behaviors. This realism enhances the reliability and usability of the trajectories for real-world applications.
3.The authors present a well-designed study, including an extensive evaluation of their method using real-world datasets and appropriate statistical analysis. The experiments and comparisons with baseline methods demonstrate the robustness and effectiveness of their approach. The evaluation is thorough and includes relevant metrics, ensuring the validity and reliability of the presented results.
4.The proposed approach demonstrates scalability and efficiency in handling large datasets. This characteristic makes it suitable for real-time applications, where processing speed and performance are crucial. The work presented in the paper has the potential to advance the field of trajectory generation, providing a valuable tool for generating context-aware GPS trajectories. The paper's contributions can benefit various domains that rely on precise GPS data for decision-making and analysis.
5.The paper is well-written and organized, effectively conveying the methodology, experimental setup, and results. The logical flow and clear explanations contribute to the ease of understanding for readers.
Weaknesses: 1. Some details of experiment setup are not clearly provided, such as datasets, metrics, and baselines.
2. While the paper provides detailed descriptions of the proposed method, the absence of an openly available implementation or codebase limits reproducibility and further exploration by the research community.
3. It would be better to discuss the generality of the proposed model. Can it be applied to other application domains?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How does DiffTraj handle potential privacy concerns associated with generating realistic trajectories?
2. Could you provide more details on the selection of baseline methods for comparison and why these specific methods were chosen? Are there any plans to release the implementation or codebase of the proposed method to facilitate replication and further research in trajectory generation?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The availability and accessibility of the datasets used in the experiments should be clearly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and perspectives. We respond to each of the points as follows:
> [W1]
Thank you for pointing out the experimental setup. We understand the importance of these details for the reproducibility of our work.
To address this, we have provided an in-depth description of the experimental setup, including details about the datasets, metrics, and baselines, in the **Appendix** of the supplementary material. We apologize for any oversight in not making this more explicit in the main manuscript.
Brief Overview:
1. Datasets: **Appendix B.1** contains a detailed description of the datasets used, including their sources, characteristics, and preprocessing steps.
2. Metrics: We have elaborated on the metrics employed for evaluation, providing both their definitions and the rationale behind their selection in **Appendix B.2**.
3. Baselines: The baselines chosen for comparison are detailed in **Appendix B.3**, along with explanations for their relevance and the context in which they were used.
4. Additional experiments: A number of additional experiments are provided in **Appendix C**, including downstream tasks, conditional generation, generating diversity, and more.
In light of your feedback, we will ensure that in the revised manuscript, we provide clear references and pointers to the relevant sections in the Appendix.
> [W2]
Thank you for emphasizing the importance of reproducibility and the availability of our implementation.
We have included the implementation code of our proposed method in the **supplementary material**. We apologize for any oversight in not making this more explicit. In light of your feedback, we will ensure that in the revised manuscript, we provide clear references and pointers to the relevant sections in the supplementary material where the code is available.
> [W3]
Thank you for emphasizing the importance of discussing the generality of our proposed model. Our model was initially designed for trajectory generation in urban traffic mobility analysis, but we see potential for wider application:
1. Temporal series data generation: The core of the trajectory data is a sequence of continuous GPS points. This means that with appropriate training data, the model can be adapted to other domains related to time series generation.
2. Spatial and temporal dynamics: the model's ability to capture both spatial and temporal dynamics is not limited to traffic patterns. Any application domain that involves spatial-temporal data, such as activity analytics or even certain financial time series, could benefit from our approach.
> [Q1]
Thank you for raising this important issue. The privacy issue is also what motivates and centers our work. Here's how DiffTraj addresses these issues:
1. Learning the trajectory distribution: while DiffTraj aims to generate realistic trajectories, it does not replicate the exact real-world trajectories of individuals. Instead, it learns the general patterns and structures present in the data, ensuring that the generated trajectories are representative but not exact replicas of real-world movements. Moreover, this prevents the possibility of reverse-engineering real trajectories from synthetic ones.
2. Noise generation: DiffTraj generates trajectories by stepwise denoising from random noise. By reconstructing trajectories from random noise during the reverse diffusion process, the model effectively decouples synthetic data from specific real data points. This ensures that the generated trajectories do not contain personally identifiable information or reveal sensitive location details, thus protecting their privacy.
> [Q2.1]
Thank you for highlighting the discussion on the selection of baseline methods. The selection of baseline methods was driven by several key considerations:
1. We prioritized methods specifically designed for or shown promise in trajectory generation or related tasks. This ensures that our comparisons are directly relevant and meaningful.
2. To provide a comprehensive evaluation, we selected methods that represent a diverse range of approaches to trajectory generation. This diversity allows us to understand the strengths and weaknesses of our method in relation to different strategies and paradigms.
3. To validate the effect of individual components of the model (e.g., UNet, Conditional module), we performed ablation experiments on the modules. Comparison with these methods validates the contribution of the individual modules of the model.
> [Q2.2]
Thank you for inquiring about the availability of the implementation code for our proposed method. We are committed to promoting reproducibility and further research in the field of trajectory generation. To this end, we have already provided the complete implementation of our proposed method. The codebase and necessary scripts and instructions are available in the **supplementary material** attached to our submission.
In the revised manuscript, we will ensure that we provide clear references and pointers to guide readers and reviewers to the provided code in the supplementary material. We appreciate your interest in our work and hope that the availability of our code will facilitate replication and further advancements in trajectory generation research.
> [Limitations]
Thank you for highlighting the importance of clearly stating the availability and accessibility of the datasets used in our experiments. We have provided comprehensive details regarding the datasets, including their sources, accessibility, and any relevant licensing information, in the supplementary material. We understand the significance of these details for reproducibility and further exploration by the research community, and we apologize for any oversight in not making this more prominent in the main manuscript. We appreciate your feedback, which has been instrumental in enhancing the clarity and thoroughness of our paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the response from the authors. My major concerns have been resolved with the answers. I think the paper quality is improved to a satisfactory level. I would like to update my assessment accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's comprehensive feedback and positive remarks. Your suggestion is invaluable and will guide our next steps in refining the paper. | Summary: The paper introduces a good approach for generating realistic GPS trajectories based on a diffusion probabilistic model. The paper addresses the challenge of generating personalized trajectories that capture both temporal and spatial dependencies while ensuring privacy preservation.
The paper proposes the DiffTraj framework, which combines a personalized transition matrix and a diffusion process to generate trajectories that closely resemble real-world GPS data. The personalized transition matrix captures the individual movement patterns of users, while the diffusion process introduces randomness and ensures diversity in the generated trajectories. Overall, the model is reasonable.
The paper presents a comprehensive evaluation of the DiffTraj framework using real-world datasets. The experimental results demonstrate that DiffTraj outperforms existing trajectory generation methods in various aspects. The proposed approach achieves high accuracy in replicating individual movement patterns while maintaining privacy by generating trajectories that deviate from the original data.
Strengths: 1 The paper introduces a unique approach to GPS trajectory generation by utilizing a diffusion probabilistic model, setting it apart from existing methods that may rely on different techniques or assumptions. The proposed diffusion probabilistic model captures both temporal and spatial dependencies, allowing for more accurate trajectory generation. This consideration enhances the realism and relevance of the generated trajectories. The diffusion probabilistic model captures both spatial and temporal dependencies, enabling the generation of trajectories that reflect the underlying dynamics of user movements.
2 Incorporating user preferences into the trajectory generation process adds a personalized aspect, enhancing the relevance and usefulness of the generated trajectories for individual users. This feature contributes to the practicality and applicability of the proposed method. The generated trajectories exhibit high diversity, capturing the variability in user behaviors and preferences. This diversity expands the range of applications where the trajectories can be utilized.
3 The paper provides a comprehensive comparison with baseline methods, showcasing the advantages of the proposed approach. The comparisons highlight the strengths and improvements of the diffusion probabilistic model for trajectory generation. The authors provide sufficient details and code availability, enabling other researchers to reproduce the experiments and verify the results. This transparency contributes to the reliability and integrity of the presented work.
4 The paper discusses potential applications of the proposed method beyond trajectory generation, such as urban planning, transportation analysis, and location-based services. This discussion broadens the scope of the paper and highlights its practical relevance.
Weaknesses: 1 The paper could provide more in-depth discussions on the assumptions made by the diffusion probabilistic model and their potential impact on the generated trajectories. Understanding these assumptions is crucial for interpreting and contextualizing the results.
2 No clear instruction is provided on how the hyperparameters are selected for DiffTraj.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1 How does the proposed diffusion probabilistic model capture both temporal and spatial dependencies in the trajectory generation process?
2 Can you elaborate on the generalizability of your approach to different geographic areas and datasets?
3 Can you explicitly state the limitations of your approach and discuss potential avenues for addressing them in future work? Acknowledging limitations and outlining future research directions would guide the community in further advancing trajectory generation methods.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The generalizability of the approach to diverse scenarios and datasets could be a limitation that warrants further exploration.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are delighted that the reviewer found our motivations and ideas interesting and original. Thank you for your positive opinions and insightful comments.
> [W1]
Thank you for highlighting the importance of discussing our diffusion probabilistic model's assumptions.
1. Noise assumption: we assume that the observed data is a noisy version of potentially clean data, which allows the model to reverse the noise-addition process and recover the original noise-free trajectory. Consequently, the quality and nature of the generated trajectories are influenced by the assumed noise distribution: if the real-world noise differs significantly from the assumed noise, discrepancies may appear in the generated trajectories.
2. Non-equilibrium thermodynamic assumptions: In urban mobility, we assume that "particles" are individuals moving in an urban environment and that the "medium" symbolizes the urban space itself. Through a step-by-step denoising process, the disordered particles eventually follow specific paths in the medium, i.e., road trajectories in the city.
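The noise assumption in point 1 can be made concrete with the standard DDPM-style forward step, $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$. Below is a minimal scalar sketch (the function name and the scalar setting are illustrative, not the paper's implementation or exact noise schedule):

```python
import math
import random

def forward_noise(x0, alpha_bar, rng=None):
    """Generic DDPM forward step (illustrative sketch): blend a clean
    value x0 with Gaussian noise according to the cumulative schedule
    value alpha_bar in (0, 1]."""
    rng = rng or random.Random(0)
    eps = rng.gauss(0.0, 1.0)  # the noise the reverse process must undo
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

# alpha_bar near 1: sample stays close to the clean data point;
# alpha_bar near 0: sample is dominated by noise.
early = forward_noise(1.0, alpha_bar=0.999)
late = forward_noise(1.0, alpha_bar=0.001)
```

If the assumed Gaussian noise model is far from the real-world noise, the mismatch propagates directly into the generated trajectories, which is the sensitivity point 1 describes.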
> [W2]
Thank you for pointing out the hyperparameter selection. We apologize for the oversight and appreciate the opportunity to clarify this aspect.
The selection of hyperparameters for DiffTraj was guided by a combination of empirical experience and adherence to general settings commonly used in similar models and tasks. Here's how we approached it:
1. Empirical experience: We leveraged insights gained from previous work and experimentation to choose values (like input length) that align with the specific characteristics and requirements of our model.
2. General Settings: We also referred to general settings and best practices in the field, considering hyperparameters that have been shown to be effective in related models and tasks. This helped ensure that our choices were grounded in established knowledge and methodologies.
3. Iterative Refinement: While our initial selection was based on experience and general settings, we conducted iterative experimentation to fine-tune the hyperparameters. This process allowed us to identify the optimal combination that achieved the desired performance on our validation datasets.
We recognize the initial manuscript's lack of detail in this area and will expand on our hyperparameter selection process in the revised version, detailing our choices, their rationale, and any guiding references.
> [Q1]
Thank you for your feedback. This is indeed a central aspect of our model, and we're pleased to explain how it is achieved:
1. UNet structure: Individual Blocks in the UNet structure capture the relationships between consecutive points in the trajectory, essentially capturing spatial dependencies. Meanwhile, its multilevel structure and skip connections allow for the fusion of global and local contextual information to capture temporal dependencies.
2. External factor representation: The model represents external spatial-temporal dependencies through conditional embedding, where spatial (start/end area, travel distance, average distance) and temporal (departure time, travel time, average speed, etc.) features are encoded.
3. Diffusion process: By iteratively denoising the data through the reverse process, the model captures the underlying spatial structure of the trajectories. In addition, the training objective of the diffusion probabilistic model aims to minimize the differences between the generated trajectories and the real trajectories in both spatial and temporal dimensions. This ensures the model learns to capture and reproduce the spatial-temporal patterns inherent in the training data.
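As a hedged illustration of the last point: a standard diffusion training objective scores how well a network predicts the noise added at step $t$. The sketch below uses a trivial stand-in model and assumed schedule values, not the actual Traj-UNet:

```python
import numpy as np

def ddpm_loss(model, x0, t, betas, rng):
    """Single-sample eps-prediction loss: noise x0 to step t, then score the
    model's noise estimate with squared error."""
    abar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
    return np.mean((model(xt, t) - eps) ** 2)

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 500)
x0 = rng.standard_normal((200, 2))                 # toy stand-in for a trajectory
zero_model = lambda xt, t: np.zeros_like(xt)       # baseline that ignores the noise
loss = ddpm_loss(zero_model, x0, 250, betas, rng)  # roughly E[eps^2], i.e. about 1
```

A trained model drives this loss well below the baseline, which is what forces it to reproduce the spatial-temporal patterns of the training trajectories.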
> [Q2] & [Limitations]
Thank you for highlighting the importance. We appreciate the opportunity to elaborate on this.
DiffTraj is designed to capture patterns and behaviors inherent in trajectory data, which ensures the model can be adapted to different data sources and contexts. This means that the model can adapt to different geographic regions. Furthermore, Section 5.4 (Transfer Learning) validates this by showing that for a new city, only 5% of the data is needed for fine-tuning to achieve strong performance in the new scenario. These results show that the DiffTraj model exhibits strong adaptability and generalization capabilities when applied to different urban scenarios.
To further validate the generalizability of our approach, we conducted experiments on a different dataset, the **Porto** dataset.
|Methods|Density|Trip|Length|Pattern|
|--|--|--|--|--|
|VAE |0.0121|0.0224|0.0382|0.763|
|TrajGAN |0.0101|0.0268|0.0332|0.765|
|Diffwave |0.0106|0.0193|0.0266|0.799|
|Diff-LSTM |0.0092|0.0141|0.0255|0.828|
|DiffTraj |0.0087|0.0132|0.0242|0.847|
Through the new experiment, we can see that DiffTraj achieves the best performance on this dataset, demonstrating its ability to adapt to different geographic regions and capture different trajectory patterns. These results underscore the model's generalizability across different datasets.
> [Q3]
Thank you for emphasizing the need to address our approach's limitations and potential future directions. Here are the key points:
1. Our method relies on raw data for synthesis, ensuring that generated trajectories mirror real-world patterns. This dependency implies that the quality and characteristics of the raw data can influence the synthesized trajectories; it also means that any biases or anomalies in the raw data might be reflected in the synthesized data.
2. While our trajectory generation is innovative, it is computationally demanding. Despite this, our approach still represents a more cost-effective solution than real-world data collection.
While our work has limitations, it marks a notable step forward in trajectory generation. We are dedicated to refining our approach in future research, building on our established foundation.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thanks for the authors' detailed responses. The answers from the authors have fully addressed my questions about the paper. I have no more concerns with the technique. I am also content with the feedback to my question about "generalizability", as the authors supplement more datasets with good discussion. Taking all into consideration, I would raise my score. The information we discussed, including the model clarification, generalizability, and potential future directions, should be included in a future version.
---
Reply to Comment 1.1.1:
Comment: We are truly grateful to the reviewer for taking the time to carefully assess our work and provide thoughtful feedback. Your suggestions, especially on model clarification and generalizability, have greatly improved our paper. | Summary: This paper adapts DDPM for trajectory generation within smart cities. The major contributions include the combination of different factors and the integration of some existing modules. The experiments over two real-world datasets can demonstrate the efficacy of the proposed model.
Strengths: 1. The paper is well-written and easy to follow.
2. The paper addresses an essential and important task in spatio-temporal data mining.
3. The experiments demonstrate the effectiveness of the proposed model over two real-world datasets.
Weaknesses: 1. Technical contribution is a bit weak. DDPM is popular in the era of AIGC. The paper adapts DDPM to trajectory generation but lacks novel or innovative designs. This drawback can also be seen in recent studies that apply DDPMs to spatio-temporal data mining, e.g., DiffSTG [1]. By the way, this paper is very similar to DiffSTG, from model design to speed-up.
2. The paper suggests that incorporating the influence of external factors is challenging, but this claim may not be entirely supported. To the best of my knowledge, existing models can easily handle such influence, e.g., conditional GAN. As the proposed model simply integrates these factors as a condition in DDPM, it is unclear why capturing external factors would pose a significant challenge. Additionally, the paper lacks an ablation study that examines the impact of these external factors on the proposed model's performance.
3. To establish the generalizability of the proposed model, more real-world datasets from diverse applications and domains are needed. The exclusive use of datasets from DiDi in this paper limits the scope of the model's applicability and may not reflect its performance on other datasets.
4. The paper omits some important related work, such as [1, 2, 3]. Furthermore, [3] and [4, 5] (included in the paper) should be used as baselines for comparison to provide a comprehensive evaluation of the proposed model's performance.
Ref:
[1] Wen, Haomin, et al. "Diffstg: Probabilistic spatio-temporal graph forecasting with denoising diffusion models." arXiv preprint arXiv:2301.13629 (2023).
[2] Yuan, Yuan, et al. "Spatio-temporal Diffusion Point Processes." arXiv preprint arXiv:2305.12403 (2023).
[3] Feng, Jie, et al. "Learning to simulate human mobility." Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 2020.
[4] Liu, Xi, Hanzhou Chen, and Clio Andris. "trajGANs: Using generative adversarial networks for geo-privacy protection of trajectory data (Vision paper)." Location privacy and security workshop. 2018.
[5] Zhang, Jing, et al. "Dp-trajgan: A privacy-aware trajectory generation model with differential privacy." Future Generation Computer Systems 142 (2023): 25-40.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please answer the questions in the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the importance of our work and for the well-written paper. We also appreciate the detailed comments posed by the reviewer. Please find below the point-to-point responses to the reviewer's comments.
> [W1]
Thank you for your feedback on our paper's technical contributions. While our work is indeed inspired by the success of DDPMs in generative models, adapting them to spatial-temporal trajectory data presented unique challenges due to differences like data dimensionality and temporal variations. Our Traj-UNet architecture is a testament to the innovations required to address these challenges.
While both our work and DiffSTG leverage DDPM-inspired models for the spatial-temporal domain, there are notable distinctions:
1. Purpose: DiffSTG focuses on traffic prediction, offering probabilistic traffic flow forecasts. In contrast, our work centers on trajectory generation, a generative task.
2. Performance: DiffSTG has limitations in prediction, whereas our model excels in trajectory generation, underscoring DDPMs' suitability for generative tasks.
3. Structural Designs: We explored various structures, including Wavenet and unconditional designs, eventually validating the efficacy of the Traj-UNet structure through rigorous experimentation.
Our primary goal with DiffTraj was to address privacy concerns in GPS trajectory data. By melding DDPM strengths with trajectory-specific innovations, we've aimed to balance leveraging established methods and introducing novel techniques.
> [W2]
Thank you for your comment. While models like conditional GANs can incorporate external factors, the challenge we want to emphasize is that of incorporating such factors when applying DDPMs to trajectory generation. We'll clarify this distinction in our revised manuscript. Our model is designed not only to capture these interactions but also to use them as guidance for trajectory generation (we present the corresponding experimental results in **Appendix C.2** and **Appendix C.3**).
Regarding the ablation study, we've addressed this in the experimental section (**Table 1**). The performance metrics of **DiffTraj-wo/Con**, without external conditions, are notably inferior, underscoring the significance of these external factors. These factors not only enhance model performance but also guide trajectory generation. In Fig. 7 and Fig. 8 in the Appendix, we see that DiffTraj can specify the start and end areas of trajectories based on this. In addition, we can also customize the length, distance, travel time, etc., of the generated trajectories.
>[W3]
Thank you for pointing out the importance of the generalizability of our proposed model across diverse datasets. We wholeheartedly agree that relying solely on DiDi datasets could limit our model's perceived applicability. To address this concern, we have conducted additional experiments using the **Porto** dataset, allowing us to test our model's robustness and adaptability further. The main experimental results and metrics on this dataset perform as follows:
|Methods|Density|Trip|Length|Pattern|
|--|--|--|--|--|
|VAE |0.0121|0.0224|0.0382|0.763|
|TrajGAN |0.0101|0.0268|0.0332|0.765|
|Diffwave |0.0106|0.0193|0.0266|0.799|
|Diff-LSTM |0.0092|0.0141|0.0255|0.828|
|DiffTraj |0.0087|0.0132|0.0242|0.847|
Our model still achieves superior performance on the new dataset, which is comparable to its performance on the DiDi datasets. These results reinforce the generalizability of our model across different datasets and domains. We believe that including the **Porto** dataset and the DiDi datasets provides a more comprehensive evaluation of our model's performance and applicability.
> [W4]
Thank you for your feedback. We will revisit the works you mentioned ([1, 2, 3]) and acknowledge their relevance to our study. We will incorporate a discussion of these works, highlighting their contributions and differentiating our approach from theirs. This will provide readers with a more complete understanding of the state of the art and the novelty of our proposed model.
We appreciate your suggestion to use [3] and [4, 5] as baselines for comparison. Including these works in our evaluation will indeed provide a more comprehensive assessment of our model's performance.
Among them, the TrajGAN in our work is exactly the model proposed in the literature [4]. The main difference between the literature [4] and the literature [5] is that the former uses CNNs, and the latter uses LSTMs. In this regard, we have reproduced both of them, and the results are as follows:
|Methods|Chengdu Density|Chengdu Trip|Chengdu Length|Chengdu Pattern|Xi'an Density|Xi'an Trip|Xi'an Length|Xi'an Pattern|
|--|--|--|--|--|--|--|--|--|
|TrajGAN [4]|0.0125|0.0497|0.0388|0.502|0.0220|0.0512|0.0386|0.565|
|DP-TrajGAN [5]|0.0117|0.0443|0.0221|0.706|0.0207|0.0498|0.0436|0.664|
However, we want to clarify the core differences between our approach and the methodology in [3]. In [3], the trajectory data is converted into a grid-based representation, and they focus on simulating human behavior by predicting these grids. This grid-based approach inherently changes the nature of the data and the problem, making it a distinct task from the continuous trajectory generation in our work. Our model is designed to handle and generate exact trajectory data without converting it into a discrete grid format. Given these foundational disparities, a direct comparison might not be meaningful.
Nevertheless, we value the contribution of [3] to human mobility simulation and will highlight its significance while distinguishing our methodology in the revised manuscript. We trust this clarifies our stance and appreciate your feedback. | Rebuttal 1:
Rebuttal: We appreciate the insightful comments and perspectives of the reviewers, and the attached figures and tables of results are included as a supplement **PDF** to the rebuttal.
Pdf: /pdf/8d431a968a28b3b581e197c642b4e7eb04ee6906.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension | Accept (poster) | Summary: This paper generalizes existing inconsistency results to non-interpolating models and more kernels to show that benign overfitting with moderate derivatives is impossible in fixed dimension.
Moreover, this paper proves that interpolation with spiky-smooth kernels can be consistent and such kernels can be induced by certain activation functions.
There are also experiments supporting the authors' claims.
Strengths: The paper is well-written and easy to follow.
It also provides us with novel understandings of the benign overfitting phenomenon: the smoothness of the estimators, and not the dimension, matters.
In particular, by considering spiky-smooth kernels, the authors find that interpolation with kernels can be consistent in fixed dimension.
This new result is very interesting.
On the technical side, the authors improve previous inconsistency results for RKHS equivalent to Sobolev RKHS of smoothness $s > d/2$ on the sphere $\mathbb{S}^d$. The techniques here are solid.
Weaknesses: N/A
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can Theorem 5 be extended to $H^s(\Omega)$, $\Omega \subset \mathbb{R}^d$ ?
If not, could the authors explain the difficulties here?
2. Is it possible to derive optimal rates for interpolation with spiky-smooth kernels?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank you for carefully reading our paper and for providing detailed feedback.
**Question 1: Can Theorem 5 be extended to $H^s(\Omega)$, $\Omega\subset \mathbb{R}^d$?**
There is a technical obstacle to generalizing Theorem 5 to open bounded sets $\Omega \subseteq \mathbb{R}^d$, which could perhaps be overcome with additional technical contributions. This is discussed before Proposition F.12 in the appendix. Our proof uses that if $H_k \cong H^s$, then the RKHS $H_{k_*} \cong H^{2s}$, which is true for dot-product kernels on the sphere as shown in Proposition F.12. This allows us to apply kernel matrix eigenvalue bounds for Sobolev kernels to $k_*$. However, for kernels on open bounded sets $\Omega \subseteq \mathbb{R}^d$, this is in general not true, as the space $H_{k_*}$ will be slightly smaller than $H^{2s}$ due to some extra ``boundary conditions''. Intuitively, the sphere is a manifold without boundary and therefore less problematic in this regard. However, with a better understanding of the spaces $H_{k_*}$ for such $\Omega$, it might be possible to extend our proof to this setting. We will add a short discussion on this issue in the updated version of our manuscript.
**Question 2: Is it possible to derive optimal rates for interpolation with spiky-smooth kernels?**
Yes, very good point. Since minimum-norm interpolation with spiky-smooth kernels mimics ridge regression with smooth kernels, the rates from kernel ridge regression can be easily transferred. We agree that this is a valuable addition to the paper and will include this result in the updated version. More details can be found in the global response. | Summary: This work studies the generalization behavior of overfitting methods in terms of the smoothness of the estimator, showing that only non-smooth estimators can interpolate benignly. They give a discussion of this result in the context of NTKs and their corresponding infinite-width architectures.
Strengths: Originality and quality: this work seems to shed some new light on a well-studied problem. I'm not very familiar with background results on asymptotic risk w.r.t. function smoothness, so it's difficult for me to assess the first several theorem statements. I'm quite familiar with the latter results on NTKs, and to my knowledge this observation about spikiness in activation functions is new, and I find it well-explained.
Clarity: the paper seems pretty clear and well-written.
Significance: the math here is nice, and it seems like this might lead to some more clarity regarding exotic activation functions.
Weaknesses: On significance: it's sort of unclear to me how this matters, or where we go from here. For example: the results of Mallinar et al. suggest that early stopping in nets (or including a ridge parameter in KRR, as shown by prior work too) is enough to make fitting consistent. These questions like "how do we design our model so it overfits benignly?" seem like they're actually missing some motivation -- why do we need to do that? Can't we just use a ridge parameter or optimal stopping?
The results around spike-inducing kernels seem quite hacky. The fact that the spike has to get smaller with dataset size makes it clear that this is just effectively adding a ridge. (The authors acknowledge this, but it's still not a surprising conclusion in my view.) It's also not useful -- the authors do not try to train their Hermite-polynomial nets, and (based on my own attempts to do such things) I suspect this may be because they're unstable, even at moderately large width!
As stated above, I'm unfamiliar with prior art around function smoothness and overfitting behavior, so there's a fair bit of uncertainty in my assessment here.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors argue that the key object to consider is estimator smoothness, not input dimension. Given that there exists a body of results that do find that input dimension is important, is it clear how to reconcile the author's new results with these old ones? For example, do the smoothness requirements get more relaxed as dimension increases?
Note: I think an activation fn corresponding to the Gaussian NNGP kernel can be written down in closed form. (I do not know if this is true for the NTK, and would be curious to know if it can.)
Note: a high-frequency sinusoidal activation function can also give you a spiky kernel!
Note for future research: this smooth + spiky kernel thing seems like a hack. However, the results of Mallinar et al. suggest that if you could write down any kernel with an eigendecay like $\lambda_i \sim i^{-1} \log^{-\alpha} i$ for $\alpha > 1$, you'd get a consistent estimator without the need to scale down the kernel. It seems plausible to me that by taking a sort of exponent-equals-zero limit of Bietti and Bach's analysis, you could construct a kernel that actually gives you this decay (even on the unit circle), and it'd be consistent while being (a) a continuous function and (b) not needing to be scaled with dataset size. Could be an interesting thing to look into.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for carefully reading our paper and for providing detailed feedback.
**Remark 1: Benign overfitting lacks motivation.**
The considerable interest in benign overfitting is mainly motivated by the great successes of overfitted NNs. In this sense, overfitting is motivated by empirical results, even though this overfitting might not be consistent or benign in a theoretical sense. A more detailed discussion of related work is provided in Appendix A. Since the community does not yet understand precisely why overfitting is often beneficial for NNs, it seems premature to dismiss approaches that can enable benign overfitting. On the other hand, it is not our main goal to improve overfitting in NNs, but rather to understand the conditions under which benign overfitting can occur. To the best of our knowledge, this paper is the first to show benign overfitting of a neural model in the challenging regime of low dimension, and rigorously establishing both consistency and inconsistency results for kernel regression posed several technical challenges. In the future work section, we acknowledge that we have not yet provided a neural method that is meant for practical use.
**Remark 2: Spiky-smooth kernels seem hacky and not surprising.**
Exploiting the connection to ridge regression is precisely what enables optimal nonparametric convergence rates. We will include this result in the updated version. More details can be found in the global response. Constructing our spiky-smooth kernel sequence as we do allows us to plug in Gaussian kernels and apply Theorem 3.1 of Simon et al. (2022). Spiky-smooth kernels are a simple idea and might not be surprising in hindsight, but we would argue that the consistency properties they can achieve are not obvious, and the simplicity is helpful in understanding their properties and the resulting implications, for example in deriving optimal rates for benign overfitting.
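A small numerical sketch of this connection (with illustrative bandwidths and data, not the exact construction from the paper): adding a tiny-bandwidth Gaussian spike to a smooth Gaussian kernel makes the minimum-norm interpolant fit the noisy labels exactly, while away from the training points it coincides with kernel ridge regression under the smooth kernel with ridge parameter equal to the spike weight.

```python
import numpy as np

def gauss(X, Y, bw):
    """Gaussian kernel matrix between point sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

rng = np.random.default_rng(0)
n = 50
X = np.linspace(-1, 1, n)[:, None]                      # equispaced training inputs
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(n)  # noisy labels

rho, spike_bw = 0.3, 1e-3            # spike weight and (tiny) spike bandwidth
K_smooth = gauss(X, X, 0.3)
K_spiky = K_smooth + rho * gauss(X, X, spike_bw)  # spike acts as rho * I on the grid

alpha = np.linalg.solve(K_spiky, y)  # minimum-norm interpolant coefficients

Xt = np.linspace(-1, 1, 400)[:, None]
f_smooth_part = gauss(Xt, X, 0.3) @ alpha  # spike component vanishes off the grid
f_ridge = gauss(Xt, X, 0.3) @ np.linalg.solve(K_smooth + rho * np.eye(n), y)
# f_smooth_part and f_ridge agree: the interpolant mimics ridge regression with
# the smooth kernel, while K_spiky @ alpha still reproduces the noisy labels y.
```

Letting the spike weight and bandwidth decay appropriately with $n$ is what then turns this ridge-mimicking behavior into consistency.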
**Remark 3: We do not train Hermite polynomial nets.**
The Hermite polynomial basis only serves us as a tool to understand which activation functions give rise to benign overfitting. Since the shifted sine function emerges as a more practical solution, there is no need to train neural networks with Hermite polynomial activation functions. When training fully connected networks with the spiky-smooth activation function in Figure 1, we observe quite stable training trajectories at large widths. However, we acknowledge in the future work section that the spiky-smooth activation functions do not yet scale to large datasets. Identifying the right architectural inductive bias for moderate widths also poses an exciting opportunity for future work. Maybe this involves more than purely adapting the activation function. The goal of this theoretical study is rather to rigorously establish when and how benign overfitting with kernels and neural networks is possible in fixed dimension.
**Question 1: What is the interplay between smoothness and input dimension?**
Let us discuss this question from several angles.
1) **Order of smoothness.** Other papers find that input dimension is important for a "typical" sequence of kernels $(k_d)_{d \in \mathbb{N}}$. Our results show that the dimension is not the single decisive quantity because one can also achieve benign overfitting in fixed dimension, even in Sobolev RKHS of arbitrary smoothness. Mallinar et al. (2022) consider, for example, Laplace-like kernels and semi-rigorously show that they overfit more benignly (tempered with decreasing constant in $d$) in high dimension. However, it is unclear how to compare their smoothness across dimensions: the Sobolev space smoothness $s = (d+1)/2$ increases with $d$, but the eigenvalue decay $\lambda_n = \Theta(n^{-(d+1)/d})$ gets slower.
2) **Smoothness in the sense of the magnitude of derivatives.** Here we refer to the global response. In short, the spike bandwidth is allowed to decay slower in high dimension. The order of smoothness of the RKHS is irrelevant, we achieve benign overfitting even by composing Gaussian kernels.
**Remark 4: An activation function corresponding to the Gaussian NNGP kernel can be written down in closed form.**
Thank you for pointing this out. We will include the derivation of Eq. (7) in the updated version. It just requires $e^{2xz-z^2}=\sum_{n=0}^\infty z^n H_n(x)/n!$ with $z=\pm i\sqrt{1/\gamma}$ and some elementary calculations. When choosing all signs in the Hermite expansion positive, the corresponding activation function was stated in closed form in Eq. (H.2). The problem with the spiky-smooth activation functions is that the corresponding Hermite coefficients are of the form $\pm\sqrt{a_n + b_n}$, where $a_n$ and $b_n$ are the Taylor coefficients of the smooth and spiky parts, respectively. So we cannot just add up the spiky and smooth activation functions. We do not see how one could write Gaussian NTK activation functions in closed form.
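As a quick numerical sanity check of the generating-function identity used here (taking a real $z$ for simplicity, whereas the derivation uses $z=\pm i\sqrt{1/\gamma}$), truncating the series at a modest order already matches the closed form to machine precision:

```python
import numpy as np
from numpy.polynomial import hermite  # physicists' Hermite polynomials H_n
from math import factorial

x, z, N = 0.7, 0.5, 30
coeffs = [z**n / factorial(n) for n in range(N)]
lhs = np.exp(2 * x * z - z**2)        # e^{2xz - z^2}
rhs = hermite.hermval(x, coeffs)      # sum_n z^n H_n(x) / n!
# lhs and rhs agree up to the (factorially small) truncation error
```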
**Remark 5: A high-frequency sinusoidal activation function induces a spiky kernel.**
Yes, a high-frequency sinusoidal activation function induces a spiky kernel. This is the idea behind adding it to a low-frequency activation function to get a spiky-smooth kernel. Similar to the small bandwidth case in the proof of Theorem 1, a pure spiky kernel will not overfit benignly.
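A random-features sketch of this point (illustrative parameters): with one-dimensional Gaussian weights $w$, the kernel induced by $\sigma(u)=\sin(\omega u)$ is $\mathbb{E}_w[\sin(\omega w x)\sin(\omega w y)] = \tfrac12\big(e^{-\omega^2 (x-y)^2/2} - e^{-\omega^2 (x+y)^2/2}\big)$, which concentrates into a spike of width $\sim 1/\omega$ around the diagonal as $\omega$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 50.0                         # high frequency (illustrative choice)
w = rng.standard_normal(200_000)     # Gaussian random weights, 1-d inputs

def rf_kernel(x, y):
    """Monte-Carlo estimate of E_w[sin(omega*w*x) * sin(omega*w*y)]."""
    return np.mean(np.sin(omega * w * x) * np.sin(omega * w * y))

k_on = rf_kernel(0.3, 0.3)    # on the diagonal: close to 1/2
k_off = rf_kernel(0.3, 0.4)   # slightly off the diagonal: close to 0
```

This spikiness alone does not suffice: as in the small-bandwidth case of Theorem 1, the spiky component must be combined with a smooth one to overfit benignly.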
**Remark 6: Consistency with a single kernel function.**
Indeed this is an interesting question, though it might well be that the resulting convergence rates would be terrible. We have mentioned this idea in Remark G.4.
**References:**
Mallinar et al. "Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting", NeurIPS 2022.
Simon et al. "Reverse engineering the neural tangent kernel", ICML 2022.
---
Rebuttal Comment 1.1:
Title: Response to response
Comment: I thank the authors for their detailed and thoughtful response. I'm more convinced that these results will prove interesting and clarifying for the community, and I've thus increased my score slightly. | Summary: This paper extends previous results on the inconsistency of ridgeless kernel regression in fixed dimension by showing that non-interpolating estimators whose norm grows comparably to the minimum-norm interpolator are also inconsistent. On the other hand, it is shown that so-called spiky-smooth kernels whose derivatives grow with the number of samples are able to interpolate training data consistently. These results are further specialized to Neural Tangent Kernels and Neural Network Gaussian Processes, and experimental results support the theory.
Strengths: * The paper is generally well-written with understandable and insightful theorem statements and proof ideas, even for non-experts.
* Modifying the kernel to achieve consistency in fixed dimension via inspirations from high-dimensional benign overfitting seems novel and interesting.
* The assumptions seem to be transparent and clearly stated in the main text.
Weaknesses: * Having explicit rate estimates in addition to asymptotic consistency guarantees could provide further insight into how the spiky-smooth kernels/activations must be designed to achieve optimal performance.
* While mathematically exciting and valuable, the results might have limited applicability in modern settings with typically high-dimensional datasets.
* The results concerning the NTK can be applied to networks in the kernel regime (a.k.a. "lazy training"), which can only provide a partial picture as there is increasing evidence that in certain settings, neural networks can generalize better in the "feature learning" regime, see e.g. [1].
[1] Malach et al. "Quantifying the benefit of using differentiable learning over tangent kernels." ICML 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * I believe it can be helpful to add a discussion on the possibility of initializing the weights of a neural network such that training remains in the NTK regime, while the initial predictor is 0, which is an assumption throughout the work.
* In Equation (2) of Theorem 1, is there a hidden dependence on $\sigma$ in $c$? In particular, is this intuition incorrect that in the noiseless setting of $\sigma = 0$, consistent estimation ($c = 0$) must be possible?
* In line 232, it is mentioned that Section 5 achieves consistency by violating Assumption (N). To my understanding, it seems like Section 5 takes advantage of having the norm (of any estimator) depend on the number of samples. On the other hand, the specific statement of Assumption (N) seems to still hold in Theorem 7, as it is stated for the minimum-norm interpolant (hence $C_\text{norm} = 1$).
Minor questions/comments:
* In line 88, it is mentioned that training infinitely wide NNs with gradient flow corresponds to learning with the NTK. Perhaps it is worth mentioning that this is only true under a certain scaling that leads to the "lazy training" regime [2], and other smaller initialization scalings that lead to the "mean field/feature learning" regime can have better generalization performance [1].
* In line 95, rotation-invariant kernels are suddenly discussed without prior context. It could be useful to mention their importance given that under standard initializations the NTK is rotation-invariant.
[1] Malach et al. "Quantifying the benefit of using differentiable learning over tangent kernels." ICML 2021.
[2] Chizat et al. "On Lazy Training in Differentiable Programming." NeurIPS 2019.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please see "Weaknesses".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for carefully reading our paper and for providing detailed feedback. We agree that all of your suggestions are important clarifications and will include them in the updated version. They will certainly improve the updated version of the paper. Concretely, we will:
- add a discussion on how we configure our NNs to stay close to the NTK limit, with an initial predictor that is zero, and mention feature learning limits.
- add the sentence "The above kernels as well as NTKs and NNGPs of standard fully-connected neural networks are rotationally invariant." before mentioning rotation-invariant kernels.
**Remark 1: Explicit convergence rates.**
We agree that providing explicit convergence rates constitutes a valuable addition to the paper. Since the minimum-norm estimator of our spiky-smooth kernel sequence mimics kernel ridge regression, it achieves optimal convergence rates for Sobolev target functions. See the global response for more details. We agree that an analysis beyond the asymptotic limit will constitute an interesting line of future work, in particular to understand when and how finite neural networks can overfit benignly on small datasets.
**Remark 2: Limited applicability on high-dimensional datasets.**
Note that the task of benign overfitting is more challenging in fixed dimension, and that this paper is the first one to show that it can be achieved by a neural network. We show that standard ReLU networks (in the NTK parametrization) cannot overfit benignly in this setting and present first ideas on how to enable nearly optimal generalization while overfitting to noise.
**Remark 3: The results only cover the kernel regime.**
We agree that analysing feature learning neural networks constitutes an interesting line of future work beyond the scope of this paper. We will add a sentence to the future work section (see global response).
At least for kernel regression, we will show in the updated version that the minimum-norm interpolant w.r.t. our spiky-smooth kernel sequence achieves optimal convergence rates for Sobolev spaces $H^s, s > d/2$. Hence, in fixed dimension, rate-optimal benign overfitting can already be achieved without feature learning.
From a theoretical standpoint, it is not clear which function classes that represent real datasets more accurately should be studied in order to resolve the question of how to design feature learning methods that outperform kernel regression.
You can find another perspective in our corresponding answer to Reviewer P771.
**Question 1: Hidden dependence on $\sigma$ in $c$?**
Without adaptive bandwidths, as in the formulation of Theorem 1 and for NTKs and NNGPs, or in the large bandwidth case, the dependence is of the form $c\sigma^2$. We will make the dependence on $\sigma$ explicit in the updated paper. However, for the more general bandwidth-independent version in Theorem D.1, the low-bandwidth case lower-bounds the estimator bias instead of the estimator variance, and this bias should in principle not depend on $\sigma^2$ but rather on something like $||f^*||$.
**Question 2: Consistency in the noiseless setting?**
We mostly study the estimator variance due to label noise in our lower bounds, which of course is zero when $\sigma^2 = 0$, but this does not say anything about the estimator bias. Whether kernel interpolation is consistent in the noiseless setting is, to our knowledge, an open question. While the kernel interpolation community often studies deterministic covariates $\boldsymbol{x}_i$, it should be possible to get consistency under relatively weak assumptions from classical theory (see e.g. Wendland, 2005) as long as $f^* \in \mathcal{H}_k$. There are also some results for functions $f^*$ in Sobolev spaces that are slightly less smooth than the RKHS (e.g. Theorem 4.2 in Narcowich et al., 2006), but these results require stronger assumptions on the separation distance of the $\boldsymbol{x}_i$, and it is unclear to us if this would be satisfied for random samples.
**Remark 5: Violating Assumption (N).**
Indeed we have $C_{\mathrm{norm}} = 1$ when comparing to the RKHS norm of the corresponding spiky-smooth kernel that depends on the number of samples, but $C_{\mathrm{norm}}$ grows unbounded when comparing to a fixed Sobolev norm. In the updated version, we will replace the sentence "The key is to violate Assumption (N) and allow for quickly exploding derivatives" by "The key is to violate Assumption (N) for every fixed Sobolev RKHS norm $||\cdot||_{H_k}$ and introduce an inductive bias towards learning spiky-smooth functions".
**References:**
Fischer and Steinwart "Sobolev Norm Learning Rates for Regularized Least-Squares", JMLR 2020.
Narcowich et al. "Sobolev error estimates and a Bernstein inequality for scattered data interpolation via radial basis functions", Constructive Approximation, 2006.
Wendland "Scattered Data Approximation", Cambridge University Press, 2005.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response. I will maintain my positive evaluation of the work. | Summary: In this paper, the authors studied the problem of benign overfitting for kernels and wide neural networks (in kernel regime) in fixed dimension. The authors showed that benign overfitting is possible if and only if the learner model has large derivatives. This implies that benign overfitting is not possible for those models with small derivatives (including ReLU NTK). On the other hand, they showed that for certain spiky-smooth activations/kernels, benign overfitting is achievable. Experiments are provided in the paper to verify the results.
Strengths: 1. The paper is clearly written and easy to follow. A proof sketch is given to help readers understand the proof more easily.
2. Understanding the benign overfitting phenomenon is an important problem for deep learning. The current paper focuses on the kernel regression setting and connects to neural networks via neural tangent kernel (NTK) theory.
3. The idea of introducing spiky terms in the NTK seems interesting. The observation in the proof that such a spiky term behaves like a regularization term, so that the whole solution approximates kernel ridge regression, is also interesting.
Weaknesses: 1. The current paper focuses on kernel regression and neural networks in the kernel regime. It would be more interesting to go beyond the kernel regime to see if similar results would also hold (e.g., in feature learning regime).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I was wondering what the dependency on dimension $d$ would be in the results such as Theorem 1.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitation is discussed in the paper. This is a theoretical work and therefore does not seem to have negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for carefully reading our paper and for providing detailed feedback.
**Remark: The results only cover the kernel regime.**
We would like to point out that this paper is the first to establish benign overfitting with a neural model in the challenging regime of low dimension, and the first to rigorously establish benign overfitting in fixed dimension for kernel regression. This regime is very different from the high-dimensional setting, in which feature-learning neural networks usually excel. For Sobolev function spaces, kernel ridge regression achieves optimal non-parametric convergence rates, so that asymptotically there is no need to propose a feature learning neural method, but it was unclear whether benign overfitting can be achieved. In the updated version, we will show that our method matches the convergence rate of kernel ridge regression (see the global response for more details). This shows that benign overfitting is not only possible with kernels and wide neural networks, but can even be rate-optimal.
We agree that benign overfitting results beyond the kernel regime would also constitute an interesting line of future work, but they are beyond the scope of this paper. We will mention the feature learning regime in the future work section (see the global response). You can find another perspective in our response to Reviewer jpYR.
**Question: What is the dependency on the dimension $d$?**
See the global response.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will keep my score. | Rebuttal 1:
Rebuttal: We want to thank all reviewers for their detailed feedback. The following remarks have been raised multiple times and we will include a discussion in the revised version of our paper:
**Question 1: Can benign overfitting with spiky-smooth kernels achieve optimal rates?**
Since minimum norm interpolation with the spiky-smooth kernel sequence mimics kernel ridge regression with the smooth kernel component, it is easy to see that our estimator achieves the same convergence rates as kernel ridge regression, if the spike bandwidth $\gamma$ converges to zero slightly faster. By choosing the quasi-regularisation $\rho$ as for kernel ridge regression, we can therefore achieve benign overfitting with optimal nonparametric rates in Sobolev spaces (up to a $\log^2(n)$ term) (Fischer and Steinwart, 2020). As discussed in the detailed related work Appendix A, $\rho$ can for example be chosen via cross-validation. We will include this result in the updated version, which resolves the question whether benign overfitting with optimal nonparametric convergence rates is possible with kernel regression in fixed dimension.
**Question 2: What is the dependency on the dimension $d$?**
This question can be posed for our inconsistency results and for our consistency result.
For our inconsistency results, the constant $c$ in the lower bound can depend on the dimension $d$ as well as the specific choice of the kernel $k = k_d$ for different dimensions $d$. For specific sequences of kernels $(k_d)_{d \in \mathbb{N}}$, there are results by Liang and Rakhlin (2020) and Liang et al. (2020) which show a convergence to zero for $d \to \infty$, and a more quantitative but semi-rigorous result by Mallinar et al. (2022). In our case, obtaining an explicit dependency on $d$ would require stronger assumptions and very careful reworking of previous works. Specifically, it would require at least
- to consider how the constants of equivalence behave for varying $d$ in Assumption (K),
- to fix one definition of the Sobolev norm, since the constants of equivalence between different equivalent Sobolev norm definitions might grow with $d$,
- to track the dependence of the constants on $d$ in our proofs and all previous works, especially Buchholz (2022) and the literature on Sobolev spaces he builds upon.
Now we discuss our consistency result. For continuous covariate distributions with upper bounded Lebesgue density, the typical distance between training points scales as $n^{-1/d}$ (see e.g. Lemma 7 in Buchholz (2022)). The spike bandwidth $\gamma$ has to decay to $0$ faster than this typical distance. For the example of the Laplace kernel, we can choose the spike bandwidth $\gamma_{n,d}=\Omega(n^{-(2+\alpha)/d})$, $\alpha>0$, as previously explained in Remark G.2. Hence, the spikes are indeed allowed to be less sharp with increasing dimension. When $d$ and $n$ have the same order of magnitude, the minimum-norm interpolants of common kernels suffice to achieve benign overfitting, albeit under stronger, favorable distributional assumptions (Liang and Rakhlin, 2020). Irrespective of the dimension $d$, we achieve benign overfitting with estimators in RKHS of arbitrary degrees of smoothness.
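The $n^{-1/d}$ scaling of the typical inter-point distance can be checked with a small simulation (our own illustrative check, not from the paper; uniform samples in $[0,1]^d$ are assumed). Doubling $n$ should shrink the typical nearest-neighbour distance by roughly $2^{-1/d}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def typical_nn_distance(n, d, trials=10):
    """Mean (over trials) of the median nearest-neighbour distance
    among n uniform points in [0, 1]^d."""
    meds = []
    for _ in range(trials):
        X = rng.uniform(size=(n, d))
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)       # exclude each point's zero self-distance
        meds.append(np.median(D.min(axis=1)))
    return float(np.mean(meds))

# Doubling n should shrink the typical distance by roughly 2^(-1/d).
ratios = {}
for d in (1, 2, 3):
    ratios[d] = typical_nn_distance(1000, d) / typical_nn_distance(500, d)
    print(f"d={d}: observed ratio {ratios[d]:.2f}, predicted {2 ** (-1 / d):.2f}")
```

This also makes concrete why the spike bandwidth may decay more slowly in higher dimensions: the distance it must undercut shrinks only as $n^{-1/d}$.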
**Limitation: Our results only cover the kernel regime.**
Since our analysis only covers the kernel regime, we will add the following sentence to the future work section: "Finite-sample analyses of moderate-width neural networks with feature-learning parametrizations and other initializations could help clarify how to induce a spiky-smooth inductive bias in feature-learning neural architectures."
**References:**
S. Buchholz "Kernel interpolation in Sobolev spaces is not consistent in low dimensions", COLT 2022.
Fischer and Steinwart "Sobolev Norm Learning Rates for Regularized Least-Squares", JMLR 2020.
Liang and Rakhlin "Just Interpolate: Kernel “Ridgeless” Regression Can Generalize", Annals of Statistics, 2020.
Liang et al. "On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels", COLT 2020.
Mallinar et al. "Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting", NeurIPS 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper shows that benign overfitting is possible when the data dimensionality is fixed, provided one goes beyond conventional minimum-norm estimators. If the estimator is allowed to be spiky, benign overfitting can still occur. These results are extended via the NTK to two-layer infinite-width neural networks. The authors showed that a nearly imperceptible sinusoid added to a ReLU nonlinearity allows the trained neural network to overfit noisy data without compromising generalization.
Strengths: (Please note that I could not have proofread the submission as well as I would have liked, due to various external obligations.) This paper extends the results on consistency / inconsistency for infinite-dimensional data to the finite-dimensional case. Although the first main point, allowing the estimator to go out of its "smooth" way to fit noisy data, could arguably have been expected, actually seeing it established is still a notable contribution. The other main point, replacing a commonly used nonlinearity with one that contains a tiny sinusoidal perturbation, was not something I expected; that is a very nice way of understanding the nonlinearity of neural networks.
Weaknesses: (Please note that I could not have proofread the submission as well as I would have liked, due to various external obligations.) This paper promotes benign overfitting, but it still leaves open whether disallowing overfitting is better or worse than benign overfitting, and, within the space of benign overfitting, how to reach good results for a given (n, d).
In Section 6.2, I felt that the noise standard deviation of 0.25 and the neural tangent kernel spike bandwidth of gamma = 1/5000 are somehow linked. It could turn out that claiming that such a network can be trained to overfit without regularization is a dangerous overstatement.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for carefully reading our paper and for providing detailed feedback.
**Q1: Is regularization better than benign overfitting?**
We would like to emphasize that we do not promote overfitting. Instead, this paper is the first to show that benign overfitting of kernels and neural networks is possible in the classical limit of $d$ fixed, $n\to\infty$, in which traditional statistics had suggested that training and test error should be balanced via explicit regularization.
Indeed the question of when overfitting to noise can be helpful is an interesting one, but difficult to answer. From a theoretical perspective, when considering convergence rates for Sobolev target functions, kernel ridge regression already achieves optimal rates, as discussed in the detailed related work (Appendix A). In the updated version, we will show that our estimator achieves the same convergence rates if the spike bandwidth $\gamma$ converges to zero slightly faster, achieving benign overfitting with optimal rates in Sobolev spaces. This is easy to see since our estimator already mimics kernel ridge regression. From this perspective, we show that both explicitly regularized as well as interpolating estimators can be optimal and bad generalization is not implied by $0$ training error alone.
From a practical perspective, there are two aspects: For kernel regression, our spiky-smooth kernels mimic ridge regularization and we do not expect them to perform better than kernel ridge regression.
For neural networks, our approach to benign overfitting is not yet practical on realistic data set sizes, more complex architectures, or outside of the kernel regime (as we mention in the future work section), therefore a comparison would be premature.
**Q2: Within the space of benign overfitting, how should one reach good results for given $(n, d)$?**
The quasi-regularization $\rho$ can be chosen as in kernel ridge regression, for example via cross-validation, as we discuss in Appendix A. Since the typical distance between training points scales as $n^{-1/d}$ for continuous random variables in $\mathbb{R}^d$ (see for example Lemma 7 in Buchholz, 2022), the spike bandwidth $\gamma$ can always be chosen as for the most challenging case $d=1$. With increasing dimension it can be chosen to decay slower as $\gamma_{n,d}=\Omega(n^{-(2+\alpha)/d})$, $\alpha>0$, as explained in Remark G.2 (see also our global response). This leaves open some parameters such as layer width and learning rate, but this paper should be seen as a first possibility result rather than as proposing a fully practical neural method.
Regarding neural networks (Section 6.2), the value $\gamma = 1/5000$ still determines the width of the spike in the NTK. The spike should be thin enough but it still needs to be possible to approximate it with finite-width neural networks. The optimal width of the spike is not influenced by the noise standard deviation, but the "regularization" parameter $\rho$ that corresponds to the height of the spike should be. |
OBJECT 3DIT: Language-guided 3D-aware Image Editing | Accept (poster) | Summary: The paper studies the problem of object-centric image editing. The authors first curate a dataset based on Objaverse by selecting high-quality textured samples, and then simulate+render them on a plane. The objects can be manipulated in 3D and rendered correspondingly, which generates the groundtruth for training learning-based object editing models. The paper further presents a diffusion-based object editing model (3DIT) based on zero-123, where the major difference is the addition of editing prompt conditioning. Results in quantitative and qualitative experiments show 3DIT outperforms baselines based on foundation models.
Strengths: The dataset curation using 3D simulation for the object-centric image editing task makes sense. This task intrinsically requires understanding of the 3D world and 3D-aware image formation process. The proposed dataset is guaranteed to be 3D-correct, and would be useful for research along this direction. The proposed method also achieved great performance compared to the baselines. Besides, the paper is well-written and easy to follow.
Weaknesses: 1. The realism of the generated dataset is still limited:
a) The single directional light (why not multiple lights or image-based lighting?) makes the distribution of shadows and shading insufficiently diverse and realistic.
b) The size range of the objects is quite limited (0.8 ratio threshold); in practice, there are many cases where differently sized objects are placed nearby (also, the true size does not seem to be preserved, so chairs and lamps appear to be of similar scale).
The authors do show some qualitative results of direct transfer to real data, but neither 1) a quantitative sim2real evaluation nor 2) a comparison of the baselines' generalization has been provided.
2. Although the data is generated with 3D, the method follows a 2D design. Such a model does not exhibit a great understanding of 3D objectness: for example, the identity of the object is sometimes not well preserved during rotation or translation (e.g. the change of the coke/headphone texture in Fig 1), and the cast shadows or shading do not fully make sense w.r.t. the whole scene on close inspection (e.g. the first 3 rows of CLEVR in Fig 1). A systematic analysis of the failure cases would be very helpful for future research.
Minor questions/comments:
1. Why is the background always black? This seems to create a domain gap, as we usually don't have pure black backgrounds in real life.
2. Incorrect number highlighted in Table 1: for 3DIT (Multitask), insertion, LPIPS, 0.585 is worse than the other two.
3. The multi-task model is much worse in terms of FID (Table 1), but not on the other metrics. Is there any specific reason for this?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see my questions in the weaknesses section. Overall, I think the pros of this paper outweigh the cons: the curated dataset would be useful for future exploration along this direction, and the proposed method can serve as a strong 2D baseline. I believe adding a sim2real comparison and a systematic failure-case analysis would further solidify the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: More analysis on failure cases and limitations will be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our submission and for providing valuable feedback. We will now respond to the highlighted questions and concerns.
**Realism of dataset**
As per reviewer suggestions, we have improved the realism of our dataset in 3 ways: (i) In line with common practices in professional film and photography, we implemented a 3-point lighting system that automatically aligns itself with the viewpoint of the camera, thereby shading objects in a way that better reveals their true 3D form; (ii) We added real-world environmental lighting into our dataset with 360-degree HDRI captures from both indoor and outdoor scenes under a variety of lighting conditions. Not only does this give scenes in our dataset realistic backgrounds, but the light emanating from these captures is integrated into the ray-tracing process during rendering, so that all aspects of the scene benefit from a more realistic lighting distribution; and (iii) We replaced the ground textures in our dataset with more realistic ones that have normal, roughness and displacement maps. Please see Figure B in the attached PDF for a visualization of the improvements.
We also retrained our models on this more realistic dataset and find that the in-domain performance follows similar trends as reported in the paper, but the models generalize better to out-of-domain data like CLEVR and real-world images. We will update the draft with all recomputed metrics for our new models on this improved dataset, but include PSNR here for brevity.
| | PSNR (seen objects) | PSNR (unseen objects) |
|------------------|---------------------|-----------------------|
| Translation | | |
| Baseline | 13.699 | 14.126 |
| 3DIT (1-task) | 14.546 | 14.4 |
| 3DIT (Multitask) | 15.21 | 15.2 |
| Rotation | | |
| Baseline | 13.179 | 12.848 |
| 3DIT (1-task) | 16.828 | 16.293 |
| 3DIT (Multitask) | 16.859 | 16.279 |
| Insertion | | |
| Baseline | 12.297 | 12.542 |
| 3DIT (1-task) | 13.469 | 12.974 |
| 3DIT (Multitask) | 13.63 | 13.088 |
| Removal | | |
| Baseline | 12.494 | 12.123 |
| 3DIT (1-task) | 24.937 | 24.474 |
| 3DIT (Multitask) | 24.98 | 24.661 |
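For reference, PSNR values like those above follow the standard definition $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$; the following is a minimal sketch assuming images scaled to $[0,1]$ (this is not the authors' evaluation code):

```python
import numpy as np

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two equally sized images."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform offset of 0.1 gives MSE = 0.01, hence 10 * log10(1 / 0.01) = 20 dB.
clean = np.zeros((32, 32, 3))
noisy = clean + 0.1
print(psnr(clean, noisy))   # ~20.0
```

Higher PSNR indicates edited outputs closer to the ground-truth renders, which is why the gains over the baseline (e.g. ~12.5 to ~24.7 dB for removal) are substantial.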
**2D design**
Our goal is to see whether it is possible to perform 3D-aware edits while working solely in the pixel space. This hypothesis is motivated by the success of approaches like Zero-123, which generate plausible novel views of a single object given only one image. Importantly, by using a large pretrained image generator, Zero-123 outperforms comparable methods that use explicit 3D geometry. Our results provide further evidence that performing 3D-aware edits while operating solely in 2D is possible.
**Failure cases analysis**
We conducted an analysis of the model's failure modes and report 4 major categories: incorrect texture, incorrect localization, unintended global modifications and incorrect geometry. Figure C in the attached PDF shows the frequency of each error type among a sample of errors annotated from our test set. Figure D shows a visualization of each error category.
**Human eval**
We also provide a quantitative human evaluation on real images. We find that human evaluators overwhelmingly prefer results from our method over the baselines. Among all tasks, insertion is the most challenging for our model.
| Task | Geometric consistency | | | Lighting consistency | | |
|-------------|-----------------------|-------------|-----|----------------------|-------------|-----|
| | Baseline | 3DIT (Ours) | Tie | Baseline | 3DIT (Ours) | Tie |
| Translation | 25% | 60% | 15% | 15% | 70% | 15% |
| Rotation | 10% | 80% | 10% | 10% | 80% | 10% |
| Insertion | 35% | 55% | 10% | 35% | 55% | 10% |
| Removal | 10% | 75% | 15% | 5% | 80% | 15% |
Below we compare the model trained on the new improved dataset and the old dataset. Training on the newer more realistic dataset leads to stronger models.
| Task | Geometric consistency | | | Lighting consistency | | |
|-------------|-----------------------|-----------|-----|----------------------|-----------|-----|
| | Old model | New model | Tie | Old model | New model | Tie |
| Translation | 15% | 60% | 25% | 15% | 65% | 20% |
| Rotation | 30% | 55% | 15% | 15% | 60% | 25% |
| Insertion | 30% | 60% | 10% | 20% | 50% | 30% |
| Removal | 10% | 60% | 30% | 10% | 60% | 30% |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. Most of my concerns are resolved, and I remain on the positive side.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our work and for giving valuable feedback. We are glad you found our work to be a positive contribution. | Summary: This paper constructs a dataset containing 400K examples, which is used for the task of language-guided 3D-aware image editing. This paper also proposes a model, named 3DIT, to solve this task. The model is based on 2D diffusion model, which first goes through the pre-training of text-to-image generation and Zero-1-to-3, and then is fine-tuned on the dataset for the 3D-aware image editing task. The model is evaluated on the proposed dataset and achieves state-of-the-art performance.
Strengths: 1. This paper constructs a large-scale dataset, which is a good contribution, and language-guided 3D-aware image editing is an important task.
2. The paper is well written and easy to understand.
3. The experiments show that the proposed model trained on the proposed dataset has a reasonable ability on the task of language-guided 3D-aware image editing.
Weaknesses: My biggest concern is that the performance of the proposed model is not very good.
1. The GIF results in the supplementary material exhibit incorrect shadows.
2. The lighting and shadows of the experimental results on real data are not realistic. The quality of the edited images degrades considerably.
Due to the limited performance, I am not sure whether the quality and diversity of the proposed dataset are good enough.
The illumination and backgrounds of the presented real data are too simple. It would be great to add more challenging test data to see the upper bound of the proposed model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our submission and for providing valuable feedback. We will now respond to the highlighted questions and concerns.
**Incorrect shadow behavior in GIF, dataset realism and diversity**
The original dataset at the time of submission used a single directional light source, leading to unrealistic lighting in our training dataset. Therefore, learning to model shadows on the OBJECT dataset showed limited transfer to CLEVR. As discussed in the common statement, we have improved the realism of the dataset; please see Figure B for a visualization of the lighting improvements. Specifically, with our new 3-point and environmental lighting, the scenes are now lit in a much more realistic manner, leading to noticeably better shadow rendering in our models. See Figures A and I in the attached PDF for a comparison between models trained on the old and new datasets.
**Difficulty of real data**
Figure G in the attached PDF shows the model editing more complex real-world scenes, as well as some failure cases. For example, when removing the can, we see some noticeable infilling artifacts. Our model often struggles with real-world scenes of higher complexity. We also conducted an analysis of the model's failure modes and report 4 major categories: incorrect texture, incorrect localization, unintended global modifications and incorrect geometry. Figure C in the attached PDF shows the frequency of each error type among a sample of errors annotated from our test set. Figure D shows a visualization of each error category.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal.
Although the performance is limited, I will stay on the positive side, considering the contribution of the dataset.
Please discuss the failure cases in the camera-ready version.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our work and for giving valuable feedback. We will add a failure analysis to the final paper. | Summary: The paper formulates a task of 3D aware editing using the language guidance. The task aims to insert, remove, translate or rotate objects in a scene (2D images) by maintaining the details like shadows, 3D consistency of the object, changes in the object sizes due to perspective projections etc. The model is based on Stable diffusion, Zero 1-to-3 method and fine tuning on the given dataset having editing information and text to describe the edit. The paper promises to release the dataset OBJECT derived from Objaverse, which the authors use to train their model on. The results in the teaser figure and others show that the model is able to perform the given edits, while preserving the semantics of the image. For examples, the objects are translated and it respects the perspective projection, shadows and placement on the surface. The authors also claim that the method is generalizable to the real images.
Strengths: 1) The paper is able to show that the manipulations possible with the method can preserve the 3D properties of the scene including localization of the objects, scaling, shadows and consistency of the inpainted regions.
2) The paper compares with image-based baselines, for example using SAM to segment the images and translate the objects in the scene. It also compares with a 3D baseline for rotation using Zero-1-to-3.
3) The paper shows quantitative and qualitative results of their method, comparing scores such as PSNR, SSIM, LPIPS and FID between the original and edited images. The authors also conduct a user study to assess the quality of lighting and geometry.
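Of the metrics listed above, PSNR is the most directly interpretable: it is a log-scaled inverse of the mean squared error between the edited and ground-truth images. A minimal NumPy sketch (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def psnr(img1, img2, max_val=1.0):
    # Peak signal-to-noise ratio for two images with values in [0, max_val].
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)   # uniform error of 0.1 -> MSE = 0.01
print(psnr(a, b))          # 10 * log10(1 / 0.01) = 20.0
```

SSIM, LPIPS, and FID are richer perceptual/distributional measures and require structural comparison or a pretrained feature extractor rather than a closed-form pixel formula.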
Weaknesses: 1) The training procedure on the given dataset is not clear to me. There is no pipeline figure to explain the stages of the training. The first two steps are previous works. The contribution, which is in the third step, is not explained properly in the paper. Is this fine-tuning stage similar to Zero-1-to-3? How was the editing sequence fed to the network? How is it 3D aware besides the Zero-1-to-3 training? Is the method's 3D consistency (for example in rotation) upper-bounded by Zero-1-to-3?
2) While the images shown in the paper demonstrate that some properties are preserved as the editing operations are performed, the results in the GIFs show some obvious flickering artifacts that do not respect properties like shadows. This does not sit well with the objective of the paper. Besides the known problems with Stable Diffusion, where do these problems arise?
3) How does the quality degrade if a number of sequential operations are performed? For example, one can perform insertion -> rotation -> translation, etc., for the same or different objects in the scene. How does the quality drop compare with Stable Diffusion image editing methods? This is more interesting to me. A 3D-aware editing framework should be able to handle multiple sequential edits with consistent results.
4) Another baseline would be to use a monocular depth estimation model (e.g. ZoeDepth, MiDaS) to extract the surface and perform the edits using Zero-1-to-3. This can handle the perspective projection of the objects, and/or even lighting and shadows. Did the authors try similar, stronger baselines? How do they compare with the current method?
5) The paper claims to generalize to real scenes. This is a significant claim and needs to be evaluated. The domain gap between synthetic and real images is an active research area. How does the current method address it in this particular task? Were real images considered in Table 2?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors do not discuss the limitations of their work in detail. Please add a detailed section on where the method fails.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our submission and for providing valuable feedback. We will now respond to the highlighted questions and concerns.
**Method clarifications**
Our approach extends zero123 with a CLIP text encoder (the same as the one used in the original StableDiffusion). The model is finetuned starting from the zero123 checkpoint in exactly the same way as zero123. A sample in a minibatch consists of the original image, instruction, and the target image. The denoising network is a diffusion model trained to recover the target image from a noised version of the target image, conditioned on the original image and the instruction. The instruction is always a single edit from one of the four supported tasks. We do not train on sequences. Please see Figures E and F in the attached PDF for further clarification.
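For intuition, the "noised version of the target image" described above refers to the standard forward diffusion process, which blends the clean target with Gaussian noise according to a noise schedule. A minimal NumPy sketch of this step (a sketch of the standard DDPM formulation, not the authors' code):

```python
import numpy as np

def noise_target(x0, alpha_bar_t, rng):
    # Forward diffusion: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps,
    # where eps is standard Gaussian noise and a_bar follows the schedule.
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return x_t, eps

rng = np.random.default_rng(0)
x0 = np.ones((8, 8, 3))              # stand-in for the clean target image
x_t, eps = noise_target(x0, 1.0, rng)
# with alpha_bar_t = 1 no noise is mixed in, so x_t equals x0
```

During training, the denoising network would be asked to predict `eps` (or `x0`) from `x_t`, with the original image and instruction as conditioning inputs.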
We clarify that 3D awareness has to do with the task definition which requires models to edit objects in complex scenes as if performing 3D transformations in the real world but while only operating in the pixel space. Zero-123 is a model that takes an image and generates novel viewpoints of the object. In doing so, Zero-123 has developed an implicit understanding of the 3D geometry of objects. However, Zero-123 does not take into account scene lighting and occlusions with other objects in the scene. Therefore, our model trained on our 3D-aware editing benchmark learns more about the geometry, lighting, and surface-contact than Zero123.
While our current method may be bounded by the limitations of Zero123, the benefit of using this approach is that, unlike approaches that involve explicit 3D geometry, our approach can be improved with scale [1]. Extensions of our framework can develop improved datasets with more objects [1], language annotations [2], or more complex environments [3], for example.
**Flickering artifacts**
The reviewer should also note that our approach is primarily geared towards editing images and not videos and hence the frames generated when creating a video visualization of the results are not guaranteed to be temporally consistent. Ensuring this temporal consistency is an active area of research [4] and would require several innovations in video modeling. Current state of the art video models still struggle with temporal consistency despite being explicitly trained for this [5]. Our newest model has improved in its ability to synthesize shadows for out of domain objects. Please see Figure I in the PDF.
**Sequential operations**
Performing a sequence of edits is an interesting idea! We provide an example of performing multiple edits on the same object in the attached PDF. Please see Figure H.
**Additional baseline**
Zero123 is an image-to-image model conditioned on a camera pose. Zero-123 doesn’t produce a 3D model nor is it conditioned on one. Therefore any 3D surface extracted with a depth estimation model would have to be converted to an image before being processed by Zero123. The baseline reported in our paper uses Zero123 to rotate an object extracted from a segmentation model and then harmonizes the resulting edit with StableDiffusion inpainting. However, in the spirit of your suggestion, to test the maximum possible performance of a method involving Zero123, we ran an additional version of this baseline where the edited object is extracted with its ground truth mask before being rotated by Zero123. The results are reported below.
| | Seen | | | | Unseen | | | |
|-----------------|--------|--------|-------|-------|--------|-------|-------|-------|
| | PSNR | SSIM | LPIP | FID | PSNR | SSIM | LPIP | FID |
| Ours multitask | 16.859 | 0.382 | 0.429 | 0.248 | 16.293 | 0.372 | 0.445 | 0.28 |
| Oracle Baseline | 15.613 | 0.3845 | 0.422 | 0.248 | 14.552 | 0.366 | 0.439 | 0.001 |
**Real-world evaluation**
To address this concern, we have conducted a human evaluation of real world images. We compare our latest model to the baselines and also compare our latest model to the original one. Please see the tables below.
| Task | Geometric consistency | | | Lighting consistency | | |
|-------------|-----------------------|-------------|-----|----------------------|-------------|-----|
| | Baseline | 3DIT (Ours) | Tie | Baseline | 3DIT (Ours) | Tie |
| Translation | 25% | 60% | 15% | 15% | 70% | 15% |
| Rotation | 10% | 80% | 10% | 10% | 80% | 10% |
| Insertion | 35% | 55% | 10% | 35% | 55% | 10% |
| Removal | 10% | 75% | 15% | 5% | 80% | 15% |
| Task | Geometric consistency | | | Lighting consistency | | |
|-------------|-----------------------|-----------|-----|----------------------|-----------|-----|
| | Old model | New model | Tie | Old model | New model | Tie |
| Translation | 15% | 60% | 25% | 15% | 65% | 20% |
| Rotation | 30% | 55% | 15% | 15% | 60% | 25% |
| Insertion | 30% | 60% | 10% | 20% | 50% | 30% |
| Removal | 10% | 60% | 30% | 10% | 60% | 30% |
References
[1] Objaverse-XL: A Universe of 10M+ 3D Objects
[2] Scalable 3D Captioning with Pretrained Models
[3] Habitat: A Platform for Embodied AI Research
[4] TokenFlow: Consistent Diffusion Features for Consistent Video Editing
[5] VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation
---
Rebuttal Comment 1.1:
Comment: While the authors did address some of my concerns, I still believe there are some quality issues. That being said, I think the dataset is a valuable addition, and hence I am raising my rating to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our work. Your insightful feedback and experiment suggestions have led us to new findings that have strengthened our work. We greatly appreciate your improved rating. | Summary: The authors propose a large dataset of 3D aware image edits along with editing instructions built on the objaverse dataset. They also introduce a model finetuned on Zero-1-to-3 for 3D aware editing tasks which include object insertion, removal, translation and rotation. Comparisons are provided against state of the art models for each task and performance improvement is demonstrated.
Strengths: 1. **Clarity**: The paper is well written with attention to detail. All the necessary details particularly with regards to the dataset creation have been adequately explained.
2. **Interesting dataset**: The 400k dataset of images along with edit instructions would serve as an interesting training and benchmarking dataset for the task of 3D aware editing.
3. **Quantitative metrics**: A number of qualitative comparisons and user studies are provided to demonstrate the geometric consistency of the edits and lighting consistency.
Weaknesses: 1. **Novelty**: Although the proposed dataset represents an important contribution, the proposed approach relies on zero-1-to-3 and finetuning on a new dataset.
2. **Need for zero-1-to-3**: The approach finetunes a model on top of zero-1-to-3 to incorporate edit instructions. Can the finetuning be done on top of base SD? Adding an ablation to this effect would be helpful to demonstrate the need for the 3-stage curriculum.
3. **Related work**: Several related work that may provide important context are missing. The authors might find some of the following works relevant and interesting [1,2,3,4]. Although some of these works are pre-prints and do not warrant strict comparisons, incorporating them into the related work section would place the proposed work appropriately w.r.t the landscape of current literature
4. **Changes in the edited image**: There are certain global changes in the edited image that dilute some of the claims w.r.t. editing. Particularly, for the CLEVR dataset, the provided supplementary examples show changes in the color of certain objects upon insertion/removal.
[1] ControlNet
[2] InstructPix2Pix
[3] InstantBooth
[4] GLIGEN
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. For translation and insertion, how are the locations specified? Are they provided in pixel space? Since the edits are 3D aware, does it make more sense to provide edit instructions in camera coordinates? (In pixel coordinates, how is depth interpreted?)
2. Is it easier to provide control inputs as a spatial map (say, as Gaussians/keypoints on a 2D image) rather than as pixel locations, to allow for more local specification?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Adequate treatment of limitations have been provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our submission and for providing valuable feedback. We will now respond to the highlighted questions and concerns.
**Novelty**
There are 3 main novel contributions in our work:
1. We propose the task of language-guided 3D-aware image editing.
2. Given the challenges involved in creating supervised training data for this task in the real world, we propose a large-scale procedurally generated benchmark as a means for learning and evaluating this new task.
3. We train a model to edit scenes in context and observe real-world transfer.
Identifying a simple approach (adapting Zero123 and finetuning on OBJECT) that works well for a challenging new task may seem obvious in hindsight, but we respectfully argue that this simplicity is a strength and a technical contribution of this work. As per the reviewer’s suggestion, the comparison of model weights presented next also confirms the value of initializing with Zero123 as compared to finetuning from Stable Diffusion. Finally, it is not obvious how to construct a training dataset that would allow a Zero123-like model to edit real scenes. As the first work to do so, we consider this a significant and novel contribution. We leave more thorough exploration of model design for future research.
**Comparison of initialization**
This is a great suggestion! We provide a study of different initialization schemes comparing model weight initialization using: (i) Stable Diffusion which was trained for text-to-image; (ii) Zero-123 trained on image-to-image novel-view synthesis on Objaverse (current initialization scheme); (iii) Zero-123 trained on a larger Objaverse-XL [1].
In the table below, we find both Zero-123 based initializations to outperform Stable Diffusion demonstrating the benefits of Zero-123’s novel-view synthesis training by helping the model develop a better implicit understanding of 3D geometry of objects. Initializing with Zero-123 trained on the larger Objaverse-XL dataset achieves similar performance as Zero-123 because Objaverse is already a massive scale pretraining dataset and further scaling Zero-123 style novel-view synthesis pretraining has diminishing returns if any. Scaling limits of our 3D-aware editing training are yet to be explored.
| | Seen Objects | | | | Unseen Objects | | | |
|-------------------|--------------|-------|-------|-------|----------------|--------|-------|-------|
| Model | PSNR | SSIM | LPIP | FID | PSNR | SSIM | LPIP | FID |
| Task: Translation | | | | | | | | |
| SD | 14.373 | 0.264 | 0.51 | 0.101 | 14.351 | 0.253 | 0.51 | 0.102 |
| Zero123 | 15.21 | 0.3 | 0.472 | 0.244 | 15.2 | 0.292 | 0.477 | 0.253 |
| XL | 15.121 | 0.294 | 0.477 | 0.252 | 15.052 | 0.286 | 0.478 | 0.239 |
| Task: Rotation | | | | | | | | |
| SD | 15.074 | 0.368 | 0.43 | 0.089 | 14.558 | 0.359 | 0.438 | 0.095 |
| Zero123 | 16.859 | 0.382 | 0.429 | 0.248 | 16.279 | 0.366 | 0.447 | 0.236 |
| XL | 15.433 | 0.381 | 0.42 | 0.241 | 15.008 | 0.3783 | 0.429 | 0.243 |
| Task: Insertion | | | | | | | | |
| SD | 13.22 | 0.253 | 0.57 | 0.108 | 13.131 | 0.255 | 0.572 | 0.1 |
| Zero123 | 13.63 | 0.263 | 0.551 | 0.222 | 13.088 | 0.261 | 0.568 | 0.214 |
| XL | 13.481 | 0.264 | 0.557 | 0.274 | 13.094 | 0.259 | 0.566 | 0.258 |
| Task: Removal | | | | | | | | |
| SD | 23.882 | 0.576 | 0.263 | 0.117 | 23.352 | 0.542 | 0.27 | 0.115 |
| Zero123 | 24.98 | 0.585 | 0.249 | 0.236 | 24.661 | 0.568 | 0.26 | 0.24 |
| XL | 24.775 | 0.585 | 0.255 | 0.247 | 24.83 | 0.568 | 0.253 | 0.215 |
**Related work**
Thank you for these recommendations! We already cited ControlNet, but will add the other three suggestions and adequately discuss them in the related work. At a high-level, a key difference between these works and ours is that our work focuses not on changing the style, texture or attributes of objects in the scene but rather on enabling 3D-like object rearrangement capabilities via 2D image editing.
**Changes in edited image**
Since our method is based on Zero123 and SD, we inherit some of the flaws of these approaches, like creating global changes while targeting local ones.
**Specifying object location**
They are provided in pixel space. Our reason for doing this is to support an intuitive interaction interface (e.g. on a phone) for editing images where a user can simply click on the region of the image they want to be edited. That being said, our modeling framework can easily be extended to incorporate camera coordinates or spatial maps as well if available.
References
[1] Objaverse-XL: A Universe of 10M+ 3D Objects
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: The authors do a great job of addressing most of the pressing concerns. I have additionally gone through the other reviews and agree that the provided dataset has some merits.
However, I am still not entirely convinced about the technical novelty of the approach.
Although the performance trends w.r.t. SD initialization vs. Zero-123 initialization are insightful and help highlight the need for the additional viewpoint-based finetuning step, it is unclear if the performance difference for these tasks comes from the viewpoint awareness injected by Zero-123 or simply from the additional data it sees during finetuning. The edits themselves, although not multiview consistent, provide a method to generate useful paired data. Additionally, as mentioned above, the proposed dataset (if released) would make for a good benchmark.
To that end, I will keep the current score and encourage the authors to release the dataset.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review our work and for giving thoughtful comments in response to our rebuttal. One of the primary goals of this paper is to establish a benchmark for this important task. We will definitely be releasing our code, models and dataset publicly.
We would also like to clarify that with both models being finetuned on the same data and task, the only difference is in their initialization. Therefore, the difference in performance must come from viewpoint awareness learned by zero123. | Rebuttal 1:
Rebuttal: # Common statement
We are encouraged by all the positive comments and thank all of the reviewers for their valuable feedback. Reviewers found our model to be “a novel approach to language guided 3D-aware image editing” (Reviewer 1T99), “the manipulations possible with the method can preserve the 3D properties of the scene” (Reviewer yCLA), and that our dataset is “guaranteed to be 3D-correct, and would be useful for research” (Reviewer SoaB), among other positive comments which we are grateful for. In this section, we will address the questions and concerns that were shared in common by multiple reviewers.
**Reviewers eZ2Z and SoaB expressed concerns that the real-world performance of our model may be limited due to the realism of the dataset.**
First, we note that in spite of the less realistic training data, we were surprised by the generalization of our models to real images. This is a significant finding that establishes training on simulated 3D-aware image editing examples as a promising direction for this challenging object-centric editing task.
Second, in line with the reviewers' suggestions, we have improved the realism of our dataset in 3 ways: (i) As per common practices in professional film and photography, we implemented a 3-point lighting system that automatically aligns itself with the viewpoint of the camera, thereby shading objects in a way that better reveals their true 3D form; (ii) We added real-world environmental lighting into our dataset with 360-degree HDRI captures from both indoor and outdoor scenes under a variety of lighting conditions. Not only does this give scenes in our dataset realistic backgrounds, but the light emanating from these captures is integrated into the ray-tracing process during rendering, so that all aspects of the scene benefit from a more realistic lighting distribution; and (iii) We replaced the ground textures in our dataset with more realistic ones that have normal, roughness and displacement maps. Please see Figures A, B, and D in the attached pdf for examples from the new dataset.
Third, we retrained our models on this more realistic dataset and find that the in-domain performance follows similar trends as reported in the paper, but the models generalize better to out-of-domain data like CLEVR and the real world images. For results, please see a table comparing human evaluation on the old and new model in response to Reviewer yCLA. Please also see figures A, G, H and I for qualitative results. We will update the draft with all recomputed metrics for our new models on this improved dataset. For brevity, we report a table with PSNR metrics which can be found in the response to Reviewer 1T99.
**Reviewers 1T99, Rpi6, yCLA, eZ2Z and SoaB raised concerns about the limitations of our method.** The reviewers correctly identified some of the failure modes of our approach which we address below:
1. **Unintended global changes / artifacts in GIF:** Like other SOTA diffusion-based editing / inpainting models, our model may introduce minor artifacts outside the region of interest, especially for out-of-distribution images such as CLEVR and real images. These artifacts are often difficult to notice in isolated edited images but become clear when making a sequence of edits and compiling them into a video or GIF, as pointed out by Reviewers 1T99, yCLA, Rpi6, and eZ2Z. Techniques for improving temporal consistency in the video-editing literature might be applicable, but our current benchmark and method focus on single atomic edits as an important first step.
2. **Inaccurate lighting and shadows:** The original dataset at the time of submission used a single directional light source leading to less realistic lighting in our training dataset. With the improvements to the training data, the scenes are now lit in a much more realistic manner leading to noticeably better shadow rendering behavior in our models. Please see the table in the response section of Reviewer yCLA for a human evaluated comparison between the new and old models. We show frames of the GIFs from our old and new model for comparison in Figure I.
However, we emphasize that our main contribution is to introduce the challenging and novel task of object-centric 3D-aware image editing and to create a benchmark for training and evaluation. Our benchmark and model will be useful for future research in this area.
## Additional experiment requests
**Initialization Ablation.** In response to Reviewers 1T99 and Rpi6’s suggestions for model ablations, we provide a study of different initialization schemes comparing model weight initialization using: (i) Stable Diffusion which was trained for text-to-image; (ii) Zero-123 trained on image-to-image novel-view synthesis on Objaverse (current initialization scheme); (iii) Zero-123 trained on a larger Objaverse-XL [1].
In the table listed in the response to Reviewer 1T99, we find both Zero-123 based initializations to outperform Stable Diffusion demonstrating the benefits of Zero-123’s novel-view synthesis training by helping the model develop a better implicit understanding of 3D geometry of objects. Initializing with Zero-123 trained on the larger Objaverse-XL dataset achieves similar performance as Zero-123 because Objaverse is already a massive scale pretraining dataset and further scaling Zero-123 style novel-view synthesis pretraining has diminishing returns if any. Scaling limits of our 3D-aware editing training are yet to be explored.
**Human-Evaluation on Real Images.** We also provide quantitative human-evaluation on real images as requested by Reviewers yCLA and SoaB. We find that human evaluators overwhelmingly prefer results from our method over baselines. Among all tasks, insertion is the most challenging for our model. Please see the tables in the section of Reviewer yCLA.
References
[1] Objaverse-XL: A Universe of 10M+ 3D Objects
Pdf: /pdf/e0bae257e08b98950303b4983e8a56605a85404f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The 3DIT model is a language-guided 3D-aware image editing tool that allows for effective object editing while considering scale, viewpoint, lighting, and object occlusions. The model builds upon previous work in scene rearrangement and image generation, and incorporates a diffusion process to render object transformations. The authors conducted human preference evaluations to measure geometric and lighting consistency, and found that 3DIT outperformed relevant baselines in both categories.
One of the key strengths of 3DIT is its ability to add, remove, or edit shadows to maintain consistency with scene lighting. This is achieved through a shadow generation module that takes into account the position and orientation of the light source, as well as the geometry of the objects in the scene. Additionally, 3DIT accounts for object occlusions by using a novel occlusion-aware rendering module that predicts the visibility of each object in the scene.
The authors also introduced a new benchmark dataset called OBJECT, which consists of 3D scenes with multiple objects and associated natural language descriptions. They trained 3DIT on this dataset and found that it generalized well to images in the CLEVR dataset as well as the real world. This demonstrates the robustness and versatility of the model, and suggests that it could be applied to a wide range of real-world scenarios. Overall, by enabling users to edit objects in a natural and intuitive way, 3DIT opens up new possibilities for creative expression and visual communication.
Strengths: - The 3DIT model is a novel approach to language-guided 3D-aware image editing that builds upon previous work in scene rearrangement and image generation. The model incorporates a diffusion process to render object transformations and uses a novel shadow generation module and occlusion-aware rendering module to maintain consistency with scene lighting and object occlusions.
- The authors conducted human preference evaluations to measure geometric and lighting consistency, and found that 3DIT outperformed relevant baselines in both categories. This demonstrates the effectiveness of the model in producing high-quality, visually consistent image edits.
- The authors trained 3DIT on a new benchmark dataset called OBJECT and found that it generalized well to images in the CLEVR dataset as well as the real world. This suggests that the model is robust and versatile, and could be applied to a wide range of real-world scenarios.
- The potential applications for this technology are vast, including virtual and augmented reality, gaming, and e-commerce. The model could be used to create personalized avatars for virtual reality environments, or to generate realistic product images for e-commerce websites. Additionally, the model could be extended to support more complex scenes and interactions, such as object physics and collision detection.
Weaknesses: - The ablation study is relatively weak. It is unclear which component contributes most to the final performance and which components are effective.
- The shadow of the box did not follow the rotation action in the shown GIF, and some artifacts are obvious.
- How did the method choose which box to be moved? Or did it need a handcrafted mask as the selection?
- What is the number of samples used for calculating FID? Normally FID is not reliable if the number of samples is small.
- The comparison tables do not include previous methods. I believe several important baseline methods [a] are missing.
References:
[a] Editable free-viewpoint video using a layered neural representation
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our submission and for providing valuable feedback. We will now respond to the highlighted questions and concerns.
**Lack of ablation**
We have extensively evaluated our model's capability using multiple metrics across tasks, single/multitask models, and seen/novel objects. In addition, we now include an ablation of model initialization. To test the effect of the pre-trained initialization, we train two additional versions of the multitask model with different initializations. One is initialized with image-conditioned StableDiffusion [1] and the other with Zero123-XL [2].
In the table below, we find both Zero-123 based initializations to outperform Stable Diffusion demonstrating the benefits of Zero-123’s novel-view synthesis training by helping the model develop a better implicit understanding of 3D geometry of objects. Initializing with Zero-123 trained on the larger Objaverse-XL dataset shows diminishing returns with regards to object-level pretraining. However, the scaling limits of our 3D-aware editing training are yet to be explored and would be an interesting topic for future work.
| | Seen Objects | | | | Unseen Objects | | | |
|-------------------|--------------|-------|-------|-------|----------------|--------|-------|-------|
| Model | PSNR | SSIM | LPIP | FID | PSNR | SSIM | LPIP | FID |
| Task: Translation | | | | | | | | |
| SD | 14.373 | 0.264 | 0.51 | 0.101 | 14.351 | 0.253 | 0.51 | 0.102 |
| Zero123 | 15.21 | 0.3 | 0.472 | 0.244 | 15.2 | 0.292 | 0.477 | 0.253 |
| XL | 15.121 | 0.294 | 0.477 | 0.252 | 15.052 | 0.286 | 0.478 | 0.239 |
| Task: Rotation | | | | | | | | |
| SD | 15.074 | 0.368 | 0.43 | 0.089 | 14.558 | 0.359 | 0.438 | 0.095 |
| Zero123 | 16.859 | 0.382 | 0.429 | 0.248 | 16.279 | 0.366 | 0.447 | 0.236 |
| XL | 15.433 | 0.381 | 0.42 | 0.241 | 15.008 | 0.3783 | 0.429 | 0.243 |
| Task: Insertion | | | | | | | | |
| SD | 13.22 | 0.253 | 0.57 | 0.108 | 13.131 | 0.255 | 0.572 | 0.1 |
| Zero123 | 13.63 | 0.263 | 0.551 | 0.222 | 13.088 | 0.261 | 0.568 | 0.214 |
| XL | 13.481 | 0.264 | 0.557 | 0.274 | 13.094 | 0.259 | 0.566 | 0.258 |
| Task: Removal | | | | | | | | |
| SD | 23.882 | 0.576 | 0.263 | 0.117 | 23.352 | 0.542 | 0.27 | 0.115 |
| Zero123 | 24.98 | 0.585 | 0.249 | 0.236 | 24.661 | 0.568 | 0.26 | 0.24 |
| XL | 24.775 | 0.585 | 0.255 | 0.247 | 24.83 | 0.568 | 0.253 | 0.215 |
**Inconsistent shadow in GIF**
As mentioned in the common statement, the original version of the OBJECT dataset used a single directional light source, which was very different from the lighting found in CLEVR (the dataset used for making the GIFs) and real images. Hence, learning to model lighting and shadows on the OBJECT dataset had limited generalization to out-of-distribution images found in CLEVR. In line with reviewer suggestions, we improved the dataset with more realistic 3-point lighting (see Figure B for examples of the improved dataset), which reduced the sim-to-real gap as well as the domain gap to CLEVR. As can be seen in Figure H, the new shadow of the blue cube more accurately moves in response to the rotation.
**Artifacts in the GIF**
Like other SOTA image editing diffusion models, the model often makes small changes outside the region of interest. These artifacts are often difficult to notice in isolated edited images but become clear when making a sequence of edits and compiling them into a video or GIF. Techniques for improving temporal consistency in video-editing literature might be applicable, but our current benchmark and method focuses on single atomic edits as an important first step.
**How did the method choose which box to move?**
We do not need to provide a mask for the box because our method is conditioned on a language description of the object. We identify the box with the prompt “a blue box” for the rotation GIF. Please see the supplementary material of our submission for the rest of the prompts in the GIFs.
**Number of samples for FID**
We calculated FID with 1024 samples.
**Additional baseline**
In this work, we proposed a novel problem of 3D-aware image editing. While there aren't existing methods that solve this exact problem, we do compare to relevant baselines that involve chaining off-the-shelf models together. The suggested baseline, from the paper “Editable free-viewpoint video using a layered neural representation”, is interesting, but requires video from 16 different cameras. Our intention is to allow users to edit photos from a single RGB image, making this baseline outside the scope of our problem formulation. However, we will add this paper to the discussion in the related work.
References
[1] High-Resolution Image Synthesis with Latent Diffusion Models
[2] Objaverse-XL: A Universe of 10M+ 3D Objects | null | null | null | null | null | null |
E2PNet: Event to Point Cloud Registration with Spatio-Temporal Representation Learning | Accept (poster) | Summary: The paper presents a method to perform point cloud registration using event camera data.
The method first learns a feature representation (E2PT) from a point cloud of events. This representation is used as input to standard registration networks.
The experimental section shows better accuracy compared to other event-based feature representations as well as frame-based approaches.
Other experiments also show that the EP2T representation is generic and can be used for other event vision tasks.
Strengths: The paper addresses a novel problem of using event camera data for 2d-3d registration. They adapt existing registration pipelines taking into account the specificity of the events.
They propose a new learned event-based representation which leads to good experimental results for the registration problem, but also for other event vision tasks (optical flow, classification, image reconstruction).
Weaknesses: The clarity of the paper can be overall improved.
In particular the method section is not always clear and some details are missing.
For example, it is not clear how the features Ft, Fs, and Fst are built (Sec. 3.1.1). The authors mention some "multi-layer convolution", but it is not clear what the input to these convolutions is, what exactly the architecture is, whether there is a separate network for Ft, Fs, and Fst or whether it is shared, what the sizes of Ft, Fs, and Fst are, etc. Describing the algorithm with more equations and pseudo-code could make the method clearer, since text and figures alone can be ambiguous.
Similarly, there is no description in the method section of the E2PNet architecture, which is presented as the second main contribution in the introduction. From the experimental details it seems that standard architectures from previous works [1,11] are used?
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Please describe more precisely the steps of the method as stated above.
Also, please clarify the event sampling strategy: "We follow the FEN [25, 26] principle and acquire 20000 consecutive
events at a time and sample N = 8192 events from them."
Does it mean that, out of 20k events, only N are used to build the tensor representation, or that multiple consecutive tensor representations are used? Why use such a specific value for N (8192)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: It would be interesting to understand the runtime of this approach, and how does it scale with respect to the event rate (number of events in the point cloud).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the thorough review and valuable suggestions. Due to space constraints, detailed explanations of the methodology are contained in Sec. A.1.1 of the appendix, including the technical details about the LA, STA, and FP modules. Our approach builds upon the 3D point-based architecture (PointNet++ [1]) and has been designed to leverage the unique characteristics of the event spatio-temporal data. For example, we introduce the spatio-temporal separation mechanism, which is explained in Sec. 3.1 of the paper. Below, we provide further details to better address your concern. We will carefully modify the camera-ready paper to make the method description more straightforward.
**Weaknesses (How are the features $F_{t}$, $F_{s}$, and $F_{st}$ built? What exactly is the architecture? )**
$F_{t}$, $F_{s}$, and $F_{st}$ are obtained by the LA module with different parameters. The LA module needs to find the corresponding K nearest neighbors for each feature aggregation center (sampling point) to perform local feature aggregation (LA). Traditional 3D point cloud approaches usually use Euclidean distance to determine nearest neighbors. In this paper, we consider the different physical meanings and distribution characteristics of event data in time and space. Hence, we use different weight parameters for time and space, and the neighborhood of each sampling point is determined according to the weighted distance. The features generated by LA with larger temporal and spatial weights are called $F_{t}$ and $F_{s}$, respectively. The features generated by LA with the same temporal and spatial weights are called $F_{st}$. Please refer to lines 180 to 190 of the main paper for more details.
The structure of LA is similar to PointNet++ [1]. For each set of spatio-temporal distance weights, after determining the K nearest neighbors of each sampling point through the weighted spatio-temporal distance, we can obtain a feature matrix of size $S\*K\*3$; $S$ represents the number of aggregation centers, $K$ is the number of neighborhood points (64 in our experiments) and $3$ corresponds to the space-time coordinates ($h, w, t$) of the event.
The features are mapped to a high-dimensional space through a 3-layer MLP to obtain features of size $S\*K\*C$, where $C=3,16,32,64$ represents the number of feature channels. Each layer of the MLP is followed by a layer of batch normalization and ReLU activation function. After that, we perform maximum pooling on the $K$ dimension, retain the neighborhood point with the largest feature value in each feature channel, and finally obtain the $S\*64$ dimensional feature.
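To make the aggregation steps above concrete, here is a minimal NumPy sketch of the weighted-distance neighborhood search followed by a shared mapping and max-pooling. The function name, the single random linear layer standing in for the 3-layer MLP, and the toy data are illustrative assumptions, not the authors' implementation; only the shapes ($S\*K\*3$ in, $S\*64$ out) follow the description above.

```python
import numpy as np

def weighted_knn_aggregate(events, centers, w_s, w_t, k=64, C=64, seed=0):
    # events: (N, 3) rows of (h, w, t); centers: (S, 3) aggregation centers
    w = np.array([w_s, w_s, w_t])                    # per-axis distance weights
    diff = centers[:, None, :] - events[None, :, :]  # (S, N, 3)
    dist = np.sqrt(((w * diff) ** 2).sum(axis=-1))   # weighted spatio-temporal distance
    idx = np.argsort(dist, axis=1)[:, :k]            # K nearest neighbors per center
    neigh = events[idx]                              # (S, K, 3) neighborhood coordinates
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((3, C))                  # toy stand-in for the 3-layer MLP
    feat = np.maximum(neigh @ W, 0.0)                # (S, K, C) after "MLP" + ReLU
    return feat.max(axis=1)                          # max-pool over K -> (S, C)

rng = np.random.default_rng(1)
events = rng.random((1000, 3))
centers = events[:16]
# larger temporal weight -> temporally compact neighborhoods, as for F_t
F_t = weighted_knn_aggregate(events, centers, w_s=1.0, w_t=4.0)
print(F_t.shape)  # (16, 64)
```

Swapping the weight emphasis (`w_s` vs. `w_t`) or making them equal would analogously produce the $F_s$ and $F_{st}$ variants described above.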
**Weaknesses (No description in the method section of the E2PNet architecture, standard architecture from previous works used?)**
The main focus of this work is the feature representation learning of the event data. While E2PNet can utilize different mature 2D-3D registration frameworks after obtaining the output from EP2T (line 152 of the paper), we do not propose a novel registration framework. In our experiments we tested the classical 2D-3D registration frameworks DeepI2P[2] and LCD[3]. We will make this more clear in the camera-ready paper and merge the contribution statement of E2PNet with EP2T to avoid ambiguity.
**Questions (Does it mean that out of 20k events, only N are used to build the tensor representation or that multiple consecutive tensor representations are used? Why use such a specific value for N: 8192?)**
As correctly pointed out, we sample N=8192 events from the original consecutive 20k events. Empirically, this strikes a good balance between efficiency and high accuracy.
To further ablate our choice, we have conducted an analysis of the impact of these two hyperparameters (number of FEN and number of sampling points) from three perspectives: accuracy, time efficiency, and memory cost. The results can be found in the newly uploaded rebuttal PDF. Overall, the proposed hyperparameter settings make E2PNet reasonably efficient (time and memory) yet significantly more accurate than the baselines. In practice, the value of N can be increased (see the result of 16384 in Tab. 1 of the rebuttal PDF) to achieve higher accuracy, or decreased (512 in Tab. 1 of the rebuttal PDF) to achieve much higher efficiency at the cost of some accuracy (still better than the baselines).
**Limitations (Runtime of this approach and how it scales with respect to the event rate)**
According to your suggestion, we conducted further analysis of runtime in the rebuttal PDF. Experiments demonstrate that our method achieves superior accuracy with acceptable overhead in terms of runtime and memory consumption. In terms of scaling, the runtime and memory cost increase with the number of sampling points, with accuracy also consistently improving.
We also identify some recently proposed techniques to optimize the efficiency of PointNet++, such as delayed aggregation [4] (2.2x speedup on PointNet++, using the ModelNet40 [5] dataset) and sparse convolution [6] (20x acceleration on VGG13 [7], using the N-Cars [8] dataset). Since our EP2T is inspired by PointNet++, a similar improvement can be expected by applying the same techniques, which will be an interesting direction for future work.
>[1] Qi, et al. "Pointnet++: Deep hierarchical feature learning on point sets in a metric space."
>[2] Li, et al. "DeepI2P: Image-to-point cloud registration via deep classification."
>[3] Pham, et al. "Lcd: Learned cross-domain descriptors for 2d-3d matching."
>[4] Feng, et al. "Mesorasi: Architecture support for point cloud analytics via delayed-aggregation."
>[5] Wu, Zhirong, et al. "3d shapenets: A deep representation for volumetric shapes."
>[6] Messikommer, et al. "Event-based asynchronous sparse convolutional networks."
>[7] Simonyan, et al. "Very deep convolutional networks for large-scale image recognition."
>[8] Sironi, et al. "HATS: Histograms of averaged time surfaces for robust event-based object classification."
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. I have no further questions for now
---
Reply to Comment 1.1.1:
Comment: Thanks for your response, we are happy to be able to address your questions! | Summary: This paper proposes a learning-based event-to-point cloud registration method, which encodes event spatio-temporal data into a grid-shaped feature tensor, and propose a framework to construct E2P datasets using existing SLAM datasets. Experiments are conducted on MVSEC-E2P and VECtor-E2P datasets, and state-of-the-art results are achieved on these datasets.
Strengths: 1. The presentation is easy to understand.
2. Experiments are well conducted and convincing.
Weaknesses: 1. Since event-based applications require a high response speed, the efficiency analysis should be given about the comparison of the runtime and memory usage between the proposed method and other state-of-the-art methods.
2. Since the experimental datasets are generated by the authors, did the authors retrain the learning-based models on the datasets in order to compare with these learning-based methods?
3. The comparisons are done to a few methods in the current manuscript, I suggest more state-of-the-art methods should be included for comparison.
4. Minor Typos / Writing
Figure 3: what does O_{E} mean?
Line 211: the definition of f_{SP} is missing.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive comments about our writing and experiments. In the following, we address the reviewer's concerns and back up our responses with additional experiments. We hope that with major concerns like efficiency analysis and training details resolved, the reviewer will consider improving the final ratings.
**Comparison of the proposed method and other state-of-the-art methods w.r.t. runtime and memory usage.**
Thanks for the suggestion; we have uploaded a new PDF analyzing the time and space efficiency of E2PNet. Though E2PNet is slightly slower (110ms *VS.* 55ms inference time) and more memory-intensive (7.98GB *VS.* 2.2GB) than direct tensor-based approaches, this overhead is arguably acceptable for the registration task given the significantly improved accuracy.
Note that the efficiency of the introduced EP2T network can be further improved by incorporating recent techniques, e.g., a 2.2x potential speedup from delayed aggregation [1]. We are committed to investigating this interesting direction to improve our method in the future. We also show in Tab. 1 of the rebuttal PDF that one can reduce the number of center points in EP2T to significantly reduce the runtime and memory consumption (512 points result in 65ms inference time and 2.9GB memory usage) while still achieving better performance than the baselines.
We will add this discussion to the camera-ready paper.
**Did the authors retrain the learning-based models?**
Yes. To fairly compare against different methods, we have retrained all learning-based baselines on the proposed datasets. We will clarify this in the camera-ready paper.
**The comparisons are done to a few methods in the current manuscript. I suggest more state-of-the-art methods should be included for comparison.**
Thanks for the suggestion. This work focuses on feature representation learning for event-to-point cloud registration. To demonstrate the effectiveness of our EP2T representation, we have compared it against hand-crafted features (discretized event volume [2], published at CVPR 2019) and advanced methods with both tensor-based representations (Tore [3], published at T-PAMI 2023) and point-based representations (ECSNet [4], published at T-CSVT 2022). All of these methods are state-of-the-art in their respective domains and serve as solid benchmarks for evaluating the performance of our proposed event representation.
To verify the effectiveness of our method under different 2D-3D registration frameworks (used after the feature representation network), we also compared against different representations under representative methods based on both registration (DeepI2P [5], published at CVPR2021) and retrieval (LCD [6], published at AAAI2020). Though there are other potential candidate registration frameworks like P2Net [7] (ICCV2021) and Pump [8] (CVPR2022), they did not release the code and our main focus is designing a better feature representation learning network that can be plugged into different downstream registration frameworks.
**Typo/writing**
We appreciate your attention to the details of our work. In Fig. 3, $O_{E}$ refers to the attention feature obtained from the spatio-temporal feature $F_{st}$ at each feature aggregation center through a classic self-attention module ($O_{E}$ has the same dimension as $F_{st}$). Our approach involves not only assigning local neighborhood features to each aggregation center through the LA module but also considering the importance of global information. To achieve this, we introduce a self-attention module to enhance the long-distance correlation between the spatio-temporal features of the aggregation centers, which allows for better information interaction among all centers.
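The $O_{E}$ computation described above can be illustrated with a toy single-head dot-product self-attention over the $S$ aggregation centers, producing an output with the same dimension as $F_{st}$. The random projection matrices, the single head, and all shapes are simplifying assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def self_attention(F_st, seed=0):
    # F_st: (S, C) features of the S aggregation centers.
    # Returns O_E: (S, C), same shape, so each center's feature is
    # refined using information from all other centers.
    S, C = F_st.shape
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    Q, K, V = F_st @ Wq, F_st @ Wk, F_st @ Wv
    scores = (Q @ K.T) / np.sqrt(C)              # (S, S) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # row-wise softmax
    return A @ V                                 # O_E, same shape as F_st

F_st = np.random.default_rng(1).random((8, 32))
O_E = self_attention(F_st)
print(O_E.shape)  # (8, 32), same dimension as F_st
```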
The $f_{SP}$ in the paper is a typo; it should be $f_{FP}$, which represents the point-wise spatio-temporal features obtained after feature propagation (the FP module).
We will fix/clarify these points in the camera-ready paper and carefully proofread the manuscript once more.
>[1] Feng, et al. "Mesorasi: Architecture support for point cloud analytics via delayed-aggregation."
>[2] Zhu, et al. "Unsupervised event-based learning of optical flow, depth, and egomotion."
>[3] Baldwin, et al. "Time-ordered recent event (TORE) volumes for event cameras."
>[4] Chen, et al. "ECSNet: Spatio-Temporal Feature Learning for Event Camera."
>[5] Li, et al. "DeepI2P: Image-to-point cloud registration via deep classification."
>[6] Pham, et al. "Lcd: Learned cross-domain descriptors for 2d-3d matching."
>[7] Wang, et al. "P2-net: Joint description and detection of local features for pixel and point matching."
>[8] Revaud, et al. "Pump: Pyramidal and uniqueness matching priors for unsupervised learning of local descriptors."
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' feedback. Based on Table 2 in the Author Response Appendix, I think the increases in runtime and memory consumption of the proposed method do not result in significant performance gains.
---
Reply to Comment 1.1.1:
Title: The significance of performance gain
Comment: Thanks for the response. In terms of the performance gain, we kindly argue that **the performance gain is significant**.
1) In Tab. 2 of the response appendix, E2PNet reduces the average error of LCD (Grayscale Image) relatively by **39%** (1.35m to 0.82m) for translation and **43%** (6.336 degrees to 3.606 degrees) for rotation. Compared to LCD + other features, E2PNet also provides **at least** a **35%** relative error reduction for translation (vs 1.263m of ECSNet) and **27%** for rotation (vs 4.968 degrees of E-statistic). All of these improvements are non-trivial and significant. Note that the performance differences between the other baselines are **much smaller** than the improvement that E2PNet provides.
2) In Tab. 1 of the response appendix, we also show that E2PNet can be tuned to trade accuracy for efficiency. Specifically, by setting the number of EP2T sampling points to 512 (rather than the default 8192), we achieve **similar runtime and memory consumption** as the baselines while still being significantly more accurate than **all** baselines. E.g., compared to LCD (Grayscale Image), we still obtain a **40%** relative reduction in rotation error and a **19%** improvement in translation error. This shows the effectiveness of E2PNet under various speed and memory constraints.
3) E2PNet also has **a much lower registration failure rate** compared to the baselines. For example, on the same dataset as in the rebuttal appendix, we compute the ratio of scenes that LCD (Grayscale Image) and E2PNet can register with a translation error of <1m and a rotation error of <1 degree. The success rate of E2PNet is 59%, yet LCD only reaches 32%, which is **nearly half of the success rate of E2PNet**. We will add this result to the camera ready.
With all these results, we kindly argue that **the performance improvement of E2PNet is significant, and the accuracy improvement does not necessarily come with overheads**.
Considering this result and the fact that we have addressed other questions/concerns, we kindly ask the reviewer to re-consider the ratings. We are happy to address further concerns/questions if the reviewer has any. | Summary: This paper proposed a Event-Points-to-Tensor (EP2T) network, which treats event data as spatio-temporal point clouds, to process event signals without losing the spatiotemporal information of event signals (especially temporal information, compared with other voxel grid-based methods). In terms of experiments, this work demonstrates the effectiveness of the EP2T network by using event-based point cloud registration as an example, resulting in the development of E2PNet. Furthermore, the authors have gone a step further and tested the generalization ability of the EP2T network in tasks such as optical flow estimation, image reconstruction, and object recognition, obtaining promising results.
Strengths: The proposed Event-Points-to-Tensor (EP2T) network in this paper takes a different approach compared to most existing event signal processing models. This method treats event signals as three-dimensional spatio-temporal point clouds and employs operations such as Local Aggregation (LA), Spatio-temporal Separated Attention (STA), and Feature Propagation (FP) for extracting and preprocessing spatio-temporal features of events. Finally, tensorization is performed to obtain event feature representations compatible with traditional visual models. Compared to previous models that always tensorize event signals before feeding them into the model, this method can fully utilize the discrete and sparse characteristics of events. It effectively extracts spatiotemporal features from events, especially rich temporal domain information. Additionally, the LA operation separates the extraction of features in the temporal and spatial domains, enabling feature extraction and aggregation on different dimensions/domains, considering the distinct physical meanings of the three dimensions in the event signal "point cloud".
Weaknesses: Although the proposed EP2T network achieves sparse processing of event signals in its first half, it still requires tensorization of the events at the end. In practice, such an operation can significantly degrade the temporal precision of events. While the 2D-3D registration task proposed in this paper may not require high temporal resolution, this precision is crucial for tasks that are sensitive to it, such as high-frame-rate video reconstruction and low-latency object tracking. Furthermore, although the idea of using point cloud networks for sparse processing of event signals is innovative, the intuitive motivation behind this approach is not clearly presented in the writing. For instance, compared to tensorization followed by feature extraction, the paper does not discuss which events EP2T can handle better and what advantages EP2T possesses.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The authors are encouraged to respond to the discussion regarding the intuitive motivation behind EP2T and provide an explanation of whether this operation takes into account the mathematical/physical characteristics of events better or offers better feature extraction advantages compared to traditional dense CNN-based model structures, or if there are other considerations involved. Furthermore, the reviewers request additional targeted experiments to validate the proposed intuitive advantages.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The author points out that the problem of large memory consumption is an important future research direction, which is similar limitation to other point-based models. If this problem can be solved, it will indeed help to improve the practicability of the model. Furthermore, there is a potential limitation that the authors did not mention in the paper, namely, that E2PNet still relies on tensorization to convert event point clouds into dense grid representations, which are then fed into downstream models designed for traditional vision tasks. It would be intriguing if future work could bypass this step and directly output target results using discrete/sparse event signals.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the value of this work and providing an in-depth review. We provide the responses to the questions/concerns below.
**The motivation, advantage, and limitation of EP2T+tensorization-based approaches**
First of all, there is no existing event to point cloud registration framework. However, several 2D-3D (tensor-based, e.g., LCD [1] and DeepI2P [2] in our experiments) and 3D-3D (point-based, e.g., GeoTransformer [3]) frameworks exist for other modalities. Due to their direct use of spatial geometry to perform registration, 3D-3D frameworks are not easily applicable to event data, since the event signal does not have 3D spatial information. 2D-3D registration frameworks can be directly applied to event data, but most temporal information is lost. Our key idea is to design a feature representation network that can effectively extract spatio-temporal features before being applied to different registration frameworks, whether tensor-based or point-based. In Tab. 1 of the main paper, we have compared the performance of EP2T when applied to both tensor- and point-based methods. As described in line 260 of the main paper, we have modified LCD [1] and DeepI2P [2] so that we directly convert point-based features into a global feature without using any grid-based tensorization. We agree that designing a better fully point-based registration framework for event data is an interesting research direction. However, we do not see an obvious advantage of pure point-based methods over EP2T+tensor-based methods. Hence, based on empirical accuracy, we choose to construct our E2PNet using EP2T+tensor-based methods. We will state this motivation more clearly in the camera-ready paper.
In terms of the advantage, focusing on the spatio-temporal feature learning makes it possible for our method to utilize different future registration frameworks, which can come from outside the field of event data processing. Since tensor-based methods are efficient, combining EP2T with tensor-based methods also provides reasonable speed and memory efficiency along with high registration accuracy.
**Limitations on memory consumption**
We appreciate your feedback on the limitation of memory consumption. To further analyze and address this limitation, we have uploaded a PDF that includes a detailed analysis of memory usage. As shown in Tab. 2 of the PDF, though E2PNet exhibits slightly lower speed (110.6ms *VS.* 55ms) and larger memory cost (7.89GB *VS.* 2.2GB), this overhead is arguably acceptable given the significantly improved accuracy (translation error decreased from 1.297m to 0.821m, rotation error decreased from 4.97° to 3.61°).
Reducing memory consumption is indeed an important direction for future work. The major contributor to the higher memory usage is the EP2T network designed based on PointNet++ [4], where the point-wise distance calculation leads to memory growth that scales with the number of points, resulting in a quadratic consumption increase. There are successful approaches that can significantly improve the space efficiency of point cloud frameworks, such as PAT [5] (2x memory reduction on PointNet++, using the ModelNet40 [6] dataset) and PVCNN [7] (3x memory reduction on point-based models, using the ModelNet40 dataset). Most of these methods can be readily integrated into EP2T, offering the potential to enhance (memory) efficiency without compromising performance. Meanwhile, for applications that require high inference speed, we can reduce the number of EP2T sampling points (Tab. 1 of the rebuttal PDF) to achieve a significant memory reduction (2.99GB with 512 sampling points) while still performing better than the baselines. We are dedicated to further optimizing the memory consumption of EP2T, and your feedback has reinforced our commitment.
>[1] Pham, et al. "Lcd: Learned cross-domain descriptors for 2d-3d matching."
>[2] Li, et al. "DeepI2P: Image-to-point cloud registration via deep classification."
>[3] Zheng, et al. "GeoTransformer: Fast and Robust Point Cloud Registration With Geometric Transformer."
>[4] Qi, et al. "Pointnet++: Deep hierarchical feature learning on point sets in a metric space."
>[5] Yang , et al. "Modeling point clouds with self-attention and gumbel subset sampling."
>[6] Wu, et al. "3d shapenets: A deep representation for volumetric shapes."
>[7] Liu, et al. "Point-voxel cnn for efficient 3d deep learning."
---
Rebuttal 2:
Comment: Hi, we understand that you are very busy. We just want to kindly remind you that the discussion phase will end soon, and we want to make sure that we have addressed all your concerns. Please leave us a comment if you still have further questions/concerns; we will try our best to address them. If we cannot respond later due to the discussion deadline, we will address them properly in the camera ready. Looking forward to hearing from you!
---
Rebuttal 3:
Comment: The rebuttal has addressed my comments. I would like to keep my rating unchanged. | Summary: In this paper, the authors proposed the first learning-based work that can handle event-to-point cloud registration (E2P). More specifically, a novel Event-Points-to-Tensor (EP2T) network is proposed to encode the data from the event camera into features tensors in the form of a 2D grid. The temporal patch aggregation and spatial patch aggregation combined with spatio-temporal kernel are applied to obtain the global feature. Then based on the output gridded tensor, a standard 2d-3d feature based algorithm is applied to obtain the structure and motion of the final 3D reconstruction scene.
The experimental results based on the two representative datasets demonstrate the performance, mainly in terms of accuracy, of the proposed E2PNet on the task of event to point cloud registration.
Strengths: The motivation of this paper is well designed, that is introducing the first learning-based architecture to handle the sparse reconstruction based on event cameras. More specifically, the spatial and temporal attention mechanism together with the feature propagation module to obtain the final output gridded tensor. The design of the EP2T network is straightforward, elegant and easy to follow.
In addition to its innovative approach, the paper stands out for its well-structured writing style that includes a clear and concise method statement. The authors have taken great care in presenting their research in a manner that is easily understandable to the readers. Furthermore, the inclusion of visual representations helps to further elucidate the concepts and techniques discussed in the paper.
Moreover, the performance shown in the statistical experiments especially in Table 1 demonstrate the superior performance of the proposed approach over the previous state-of-the-art approaches.
Weaknesses: My major concerns lie in the following two aspects.
First, the statistical experiments in Table 1-3 only demonstrate the accuracy in terms of camera position and direction, while the efficiency especially the time and memory efficiency is missed.
Second, the limitations or failure case of the proposed approach need to be discussed. I am wondering the performance of the proposed approach on more complex and even comparatively large-scale environment.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the "Weakness" section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: See the "Weakness" section. Furthermore, the final score assigned to the study is not solely determined by the discussion among peer reviewers. If the authors can solve my main concerns, I would like to raise the score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thanks for your positive comments about the novelty, writing, and experiments of this work. Please see the following responses to your concerns.
**Time and memory efficiency** We have uploaded a new PDF containing an analysis of the time and memory efficiency of E2PNet. As shown in Tab. 2 of the new PDF, E2PNet is slightly slower (110.6ms *vs.* 55ms) and uses more memory (7.89GB *vs.* 2.2GB) than baseline methods. However, the overhead introduced by EP2T is arguably small given the significantly improved accuracy (translation error decreased from 1.297m to 0.821m, rotation error decreased from 4.97° to 3.61°). Note that further efficiency improvements are possible by 1) combining EP2T with more efficient tensorized representation methods, 2) tuning the number of spatial-temporal sampling points for an efficiency-accuracy trade-off (Tab. 1 and 2 of the newly added PDF show that when the number of sampling points is reduced to 512, E2PNet runs at a similar speed (65.5ms) and memory cost (2.99GB) as other baselines, yet still has higher accuracy), and 3) optimization techniques like delayed aggregation [1] (2.2x speedup on PointNet++ [2] using the ModelNet40 [3] dataset) and sparse convolution [4] (20x acceleration on VGG13 [5] using the N-Cars [6] dataset). We will add this analysis and discussion to the camera-ready version following your suggestion.
**Limitations or failure cases** As discussed in lines 325 to 328 of the paper and in the previous response, the limitations of the proposed method lie mainly in efficiency, which is a common problem for 3D point-based learning architectures. We will state this limitation more clearly by adding the efficiency analysis results to the camera-ready paper and providing further discussion as mentioned in the previous response.
**Complex or Large-scale Environment** Exploring datasets with a more complex structure or a larger scale is indeed a significant problem. Since the registration task requires accurate ground-truth poses for supervision, which (at the moment) is only available in existing indoor data, we evaluate algorithms on indoor scenes in this work. However, we tried our best to involve large-scale and challenging scenes in the experiments (Tab. 1 of the main paper). The VECtor [7] dataset contains indoor scenes with corridors that have long edges of more than $60$m. We use different scenes in the training and testing data to better evaluate the generalization capability of different algorithms. We will discuss this limitation in the paper but leave the construction of better benchmarks for future work.
>[1] Feng, et al. "Mesorasi: Architecture support for point cloud analytics via delayed-aggregation."
>[2] Qi, et al. "Pointnet++: Deep hierarchical feature learning on point sets in a metric space."
>[3] Wu, et al. "3D ShapeNets: A deep representation for volumetric shapes."
>[4] Messikommer, et al. "Event-based asynchronous sparse convolutional networks."
>[5] Simonyan, et al. "Very deep convolutional networks for large-scale image recognition."
>[6] Sironi, et al. "HATS: Histograms of averaged time surfaces for robust event-based object classification."
>[7] Gao L, et al. "Vector: A versatile event-centric benchmark for multi-sensor slam."
---
Rebuttal 2:
Comment: Dear reviewer, thanks for your hard work. We understand that you might be very busy at the current moment. A kind reminder that the discussion phase will end soon. We would like to ask whether our responses have addressed your previous questions/concerns. We are happy to have further discussions if you still have other comments. If we cannot respond later due to the discussion deadline, we will address them properly in the camera ready. Looking forward to your reply! | Rebuttal 1:
Rebuttal: We thank all reviewers for their positive comments about the novelty (R1, R2, R4), significance (R1, R2, R4), writing quality (R1, R2, R3), and experiments (R1, R2, R3, R4) of this work.
A common question/concern was the efficiency of the proposed method. We have conducted several experiments and provide a detailed analysis in the uploaded rebuttal PDF. The results show that our method is reasonably efficient while achieving a much higher accuracy compared to the baselines. We also identify and discuss different approaches to further improve the efficiency. Please refer to the individual responses for further details and also responses to other specific concerns. We appreciate all reviewers' feedback and will incorporate it in the final manuscript.
Pdf: /pdf/6d6ca0aca31c828e98155972cfb503b48308fb9c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Finite-Time Analysis of Whittle Index based Q-Learning for Restless Multi-Armed Bandits with Neural Network Function Approximation | Accept (poster) | Summary: This paper proposes a neural network approach to learning the Whittle index policy. In addition, the paper gives a finite-time analysis for the algorithm, and shows that the algorithm indeed learns the Whittle index values for restless bandits.
Strengths: The paper’s sections are well-written and the technical claims of the paper are sound. In addition, the paper provides sufficient analysis of the two-timescale stochastic approximation used here (to learn the Whittle index values and the action-value function for each arm). Also, the paper offers finite-time performance bounds in the neural network setting, which is a largely unstudied area.
Weaknesses: While the paper is well-written, I believe it would be better if fewer remark/lemma blocks were used and they were written as paragraphs instead. Other than that, the technical details appear correct to the best of my knowledge.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: Question: the neural network is using ReLU activation functions. Would the finite-time analysis be applicable to other activation functions such as tanh?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations are addressed by the authors in the paper in regards to the finite-time analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and constructive comments, as well as giving the positive rating of our work. Here we would like to address the reviewer's concerns and hope that can help raise the rating of our paper. The detailed responses are as follows:
**Weakness #1:** While the paper is well-written, I believe it would be better if fewer remark/lemma blocks were used and they were written as paragraphs instead. Other than that, the technical details appear correct to the best of my knowledge.
**Our Response:** We thank this reviewer for this suggestion. We will modify those remarks and lemmas in the camera-ready version per your suggestion.
**Question #1:** The neural network is using ReLU activation functions. Would the finite-time analysis be applicable to other activation functions such as tanh?
**Our Response:** We thank this reviewer for this insightful comment. The finite-time analysis of Deep Q-Networks (DQN) [13, 22, 58] and references therein focuses on the ReLU (Rectified Linear Unit) activation function, as it has certain properties that make the analysis tractable. ReLU is piecewise linear and non-saturating, which can simplify the mathematical analysis.
Applying the same analysis to other activation functions like the hyperbolic tangent (tanh) could be more complex. The tanh function is smooth and saturating, meaning that it squashes its input into a small range. This can lead to different behaviors during training, such as vanishing gradients, which might not be captured by an analysis designed for ReLU. It is typically used for binary classification and is rarely used in deep Q-learning.
To the best of our knowledge, research on other activation functions for DQN is quite limited, and this remains an open research problem at the current stage.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my question; I will keep my score at 7 for the paper.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your acknowledgement and keeping the positive rating of our paper. Much appreciated! | Summary: This paper presents a neural Q learning method to compute the Whittle indices in restless multi-armed bandit problems. The paper provides an algorithm using two-timescale stochastic approximation (2TSA) to update the parameters in the neural networks and the Whittle indices jointly with different learning rates. The authors also show that the 2TSA method guarantees convergence to the optimal/approximately optimal solution of Whittle indices. One of the major contributions is the breakthrough of not using projection step in the 2TSA algorithm. The algorithm proposed by the authors doesn’t require the functional class to contain the optimal functional approximator. The corresponding approximation guarantee also shows an additional term dependent on the distance of the true optimal to the span of the functional bases used in neural networks. Lastly, the authors provide experiments to show the convergence of the proposed algorithm and empirically verify the assumptions made in the paper.
Strengths: I like the idea of projecting the neural networks using ReLU to linear functions to enable downstream analysis. The paper also generalizes the theoretical analysis in previous work to quantify the impact of approximation error in the function approximator class. I didn’t go through the appendix, but the proof sketch is clear to me. Overall, the paper is nicely written, with new theoretical contributions compared to the previous work. It would be great if the authors could further emphasize and summarize the contributions in the theoretical analysis that arise from not using projection steps. The convergence rate also matches the previous work using projection steps.
Weaknesses: The comparison to the previous algorithm is not clear. I didn’t fully understand why the previous algorithm requires a projection step to force the parameters into a bounded set and why your algorithm doesn’t. I believe it is due to the convergence analysis, where unbounded parameters can lead to a useless bound or divergence. Could you please clarify and emphasize this in the paper to highlight your contribution more clearly? Since this seems to be the major contribution of the paper, it deserves a larger portion of the paper to clarify it.
Please also clarify why the definition of the Whittle index is different from Whittle et al. and most of the literature; please see below for more details. I am worried that the different definition can impact the convergence analysis (especially the linearity and Lipschitzness in the proof). Please either justify the use of your definition and provide references showing this different definition is valid, or show us that the analysis is not impacted by the definition.
These two are my major concerns of the paper. Please clarify them and I am happy to update my score based on the response.
[Answered by the authors during the rebuttal]
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: **Question 1**:
Equation 3: the Whittle index defined by Whittle is the Lagrangian multiplier such that the Q values of actions 0 and 1 are identical, i.e., Whittle index $\coloneqq \inf_\lambda \{ Q_n(s,1;\lambda) = Q_n(s,0;\lambda) \}$, where the Q functions are functions of $\lambda$ (the Lagrangian multiplier) and thus parameterized by $\lambda$. However, your definition in Equation 3 is the difference between Q values, i.e., Whittle index $\coloneqq Q_n(s,0) – Q_n(s,1)$, where the Lagrangian multiplier defining the Q values is also not specified. To my knowledge, this is not equivalent to the original Whittle index definition. The original Whittle index is defined based on the solution to the Lagrangian relaxation, but yours doesn’t have this property. Please clarify/justify why you can define the Whittle index in this way or provide any references supporting your claim. And please also clarify how this definition impacts your downstream analysis.
I have also checked your references in [3,23] are also aligned with the original Whittle index definition (Whittle index defined such that Q(s,0) = Q(s,1)). Please see Equation (4) in [23] and Equation (11) in [3].
Equation (13) will also be impacted by the definition of the Whittle index, as will the analysis in Equation (21) and Lemma 2, which will no longer be linear.
References:
[3] Konstantin E Avrachenkov and Vivek S Borkar. Whittle index based q-learning for restless bandits with average reward. Automatica, 139:110186, 2022.
[23] Jing Fu, Yoni Nazarathy, Sarat Moka, and Peter G Taylor. Towards q-learning the whittle index for restless bandits. In 2019 Australian & New Zealand Control Conference (ANZCC), pages 249–254. IEEE, 2019.
---
**Question 2**:
Do other choices of step size sequence $\alpha$ and $\eta$ work as well?
**Feedback 3**:
Equation 18 is the linearized version of Equation 6, where the coefficient associated with the linear term is defined by the initial point $\theta_0 = [w_{r,0}]$. The definition is correct but slightly unclear.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - Different definition of Whittle index: if this is true, then the analysis might be restricted to the definition shown in the paper.
- Linearization: the authors need to linearize the Lyapunov function to enable the theoretical analysis. I believe this is the reason why the authors focus on 2-layer neural networks (only input and output and a ReLU activation function). This can limit the functional approximator class to such neural networks only. For more complex NN structures, the same linearization trick and analysis don't apply anymore.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and constructive comments. Here we would like to address the reviewer's concerns and hope that can help raise the rating of our paper. The detailed responses are as follows:
**Weakness \#1:** ... project step...
**Our Response:** Thank you for this insightful comment. The projection step in reinforcement learning originates from [6], which provided the first finite-time performance of TD learning with linear function approximation. Due to the Markovian observation noise, the dependent nature of the data introduces a substantial technical challenge: the algorithm’s updates are not only noisy, but can be severely biased. Hence, [6] proposed a variant of TD that projects the iterated parameters onto a norm ball. This projection step imposes a uniform bound on the noise of gradient updates across time, which is needed for tractability. Later on, similar techniques were extended to Q-learning with linear function approximation [62, 63] and neural network function approximation [13, 22, 58]. Note that all these works **target the convergence of Q function values**, where the bounded gradient plays a key role.
To remove the projection step under Markovian observations, [47] is the first work that treats TD learning with linear function approximation as a linear stochastic approximation with Markovian noise and shows **the convergence of the parameters in terms of Lyapunov stability theory for linear ODEs by designing a suitable Lyapunov function.** Since the stochastic-approximation-based technique tracks the drift of the parameters rather than the Q function values, there is no need to bound the gradient for the parameter update. Instead, we may leverage other properties such as the Lipschitz continuity of the approximate linear functions, which we proved in Lemmas 1-3 (lines 268-274). Later on, [15] extended the result to Q-learning with linear function approximation.
Until now, the finite-time performance of Q-learning with neural network function approximation has remained unexplored, let alone our proposed Q-learning-based Whittle index with neural network function approximation (Neural-Q-Whittle). Our goal is to provide the first-ever finite-time convergence of Neural-Q-Whittle without the additional projection step, by treating it as a two-timescale stochastic approximation under Markovian observations that tracks the drift of the parameters rather than the Q function values.
**Weakness #2:** ...whittle index definition...
**Our Response:** We thank the reviewer for this insightful comment and sharp observation. We believe there is a misunderstanding here, mainly due to our unclear statement; we are sorry for this and will make it clear in the camera-ready version. Below, we provide some clarifications.
In our paper, the Whittle index is still defined in the same manner as in conventional works [3, 23], which is the Lagrangian multiplier such that the Q values of action $0$ and $1$ are identical. With that being said, we have the following equation for action being $0$ and $1$,
$$
Q_n(s,0)+{\beta}=r_n(s,0)+\lambda+\sum_{s^\prime}p_n(s^\prime |s,0)V_n(s^\prime),
$$
$$
Q_n(s,1)+{\beta}=r_n(s,1)+\sum_{s^\prime}p_n(s^\prime |s,1)V_n(s^\prime).
$$
Hence, we have
$$
Q_n(s,0)-Q_n(s,1) = r_n(s,0)+\lambda+\sum_{s^\prime}p_n(s^\prime |s,0)V_n(s^\prime)-r_n(s,1)-\sum_{s^\prime}p_n(s^\prime |s,1)V_n(s^\prime),
$$
which means that $\lambda^*\_n(s)$ is the value that makes $Q\_n(s,0)-Q\_n(s,1)=0$.
This leads to the definition of Whittle index in Equation (3), i.e.,
$$
\lambda\_n^*(s) = r\_n(s,1)+\sum_{s^\prime}p_n(s^\prime |s,1)V\_n(s^\prime)-r_n(s,0)-\sum_{s^\prime}p_n(s^\prime |s,0)V_n(s^\prime).
$$
This is exactly the same as Equation (4) in [23] and Equation (11) in [3]. Note that in the current Eq. (3), the term $Q_n(s,0)-Q_n(s,1)$ is redundant and should be removed. We thank the reviewer again for this comment and will make this part clear in the camera-ready version.
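For concreteness, the fixed-point characterization above (the Whittle index is the subsidy $\lambda$ at which $Q_n(s,0)=Q_n(s,1)$) can be checked numerically. The sketch below is an illustrative toy, not the setting analyzed in our paper: it uses a discounted single-arm model rather than the average-reward formulation, the discount factor and bisection bounds are assumptions, and bisection presumes the arm is indexable.

```python
import numpy as np

def q_values(r, P, lam, gamma=0.9, iters=500):
    """Q values of a single arm when the passive action earns subsidy lam.

    r[s, a]    : reward for state s, action a (0 = passive, 1 = active)
    P[a][s, s']: transition kernel under action a
    """
    n = r.shape[0]
    V = np.zeros(n)
    r_lam = r.copy()
    r_lam[:, 0] += lam  # subsidy added to the passive action
    for _ in range(iters):  # plain value iteration
        Q = r_lam + gamma * np.stack([P[0] @ V, P[1] @ V], axis=1)
        V = Q.max(axis=1)
    return Q

def whittle_index(r, P, s, lo=-10.0, hi=10.0, tol=1e-6):
    """Bisect on lam until Q(s,0;lam) = Q(s,1;lam) (assumes indexability)."""
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        Q = q_values(r, P, lam)
        if Q[s, 1] > Q[s, 0]:  # activation still preferred -> raise the subsidy
            lo = lam
        else:
            hi = lam
    return 0.5 * (lo + hi)
```

As a sanity check, when both actions share the same transition kernel the continuation values cancel and the index reduces to $r_n(s,1)-r_n(s,0)$.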
**Your Question #2:** ... step size...
**Our Response:** Yes. As long as the stepsize sequences $\alpha$ and $\beta$ satisfy the conditions in line 119 and line 121, almost-sure convergence for learning the Whittle index by Q-learning has been characterized in [3]. However, in this paper, we aim to provide the first-ever finite-time convergence rate by carefully designing the stepsize sequences as in Theorem 1 (line 212), which achieves $\mathcal{O}(1/k^{2/3})$, the best-known convergence speed for general linear and nonlinear 2TSA with i.i.d. noise [19, 21].
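To illustrate the role of the two timescales, here is a generic scalar 2TSA toy: the fast iterate tracks the slow one while the slow iterate drifts to its own fixed point. The step-size exponents, noise level, and iteration count below are placeholders for illustration, not the schedule of Theorem 1, and the noise here is i.i.d. rather than Markovian.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 100_000
noise = rng.normal(scale=0.1, size=(K, 2))  # i.i.d. noise for this toy
theta, lam = 5.0, 5.0                       # fast / slow iterates
target = 1.0                                # fixed point of the slow update

for k in range(K):
    alpha = (k + 1) ** (-2 / 3)  # fast step size (placeholder exponent)
    eta = 1.0 / (k + 1)          # slow step size; eta / alpha -> 0
    theta += alpha * (lam - theta + noise[k, 0])  # fast: track lam
    lam += eta * (target - lam + noise[k, 1])     # slow: drift to target
```

In Neural-Q-Whittle the fast iterate is the vector of network parameters and the slow iterate is the Whittle index, but the coupled-update structure is the same: because the fast step size dominates, the fast variable stays near its equilibrium for the current slow variable.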
**Limitation #2:** ... 2-layer NN...
**Our Response:** Thank you for this insightful comment. Unfortunately, we are afraid that we cannot agree with the reviewer's argument. Local linearization is a common trick to tackle the non-convexity of the original neural network function in finite-time analysis [13, 22, 58]. The major difference is that those works characterize the convergence of Q function values while we track the drift of the parameters. That is the fundamental reason why they need a projection step for the parameter update while we do not require one. The local linearization technique can indeed be extended to multi-layer neural network functions as in [58].
The reason why we consider a two-layer NN is that it serves as the basis of multi-layer NNs and has strong representation power as the width $m$ grows large [13, 22]. The theoretical results for the two-layer NN provide the foundation for characterizing the result for multi-layer NNs, just as has been done in [13, 22]. In the current manuscript, our focus is to provide a fundamental theoretical analysis of a two-timescale Q-learning-based Whittle index with neural network function approximation, which is already an open question and quite challenging, and for which we make clear contributions as summarized in the introduction. We thank the reviewer for raising this question; extending the current framework to multiple layers is left for future research.
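The local-linearization trick itself is easy to demonstrate: freeze the ReLU activation pattern at the initial weights $W_0$, so the network becomes linear in its parameters. The sketch below uses illustrative dimensions and a fixed $\pm 1$ output layer, which is a toy in the spirit of this line of work, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 256, 5                        # width and input dimension (illustrative)
W0 = rng.normal(size=(m, d))         # initial hidden weights w_{r,0}
b = rng.choice([-1.0, 1.0], size=m)  # fixed +-1 output weights

def f(W, x):
    """Two-layer ReLU net: (1/sqrt(m)) * sum_r b_r * relu(w_r . x)."""
    return (b * np.maximum(W @ x, 0.0)).sum() / np.sqrt(m)

def f0(W, x):
    """Local linearization: activation pattern frozen at W0, linear in W."""
    mask = (W0 @ x > 0).astype(float)
    return (b * mask * (W @ x)).sum() / np.sqrt(m)
```

The two functions coincide exactly at $W_0$ and stay close for $W$ near $W_0$, since only neurons whose activation sign flips contribute to the gap; this is what allows the finite-time analysis to run on the linearized function and then charge the linearization error back to the original network.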
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I appreciate the authors for their detailed response, especially the response to my Question #2. It addresses my concern about the definition of Whittle index and I will update the rating. Thank you!
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your acknowledgement and raising the rating of our paper. Much appreciated! | Summary: This paper investigates the finite-time analysis of the Whittle index-based Q-learning policy for the RMAB problem under neural function approximation. The authors formulate the algorithm as a nonlinear two-time-scale stochastic approximation problem and present a convergence rate of $K^{2/3}$.
Strengths: 1. The Neural-Q-Whittle algorithm eliminates the projection step.
2. The paper provides finite-time analysis.
3. Simulation results are included to verify the convergence performance.
Weaknesses: 1. It is unclear whether the approximated Q-functions converge or not.
2. The errors diminish as $m\rightarrow \infty$, indicating the need for overparameterization.
3. The simulation setting is too simple to sufficiently demonstrate the advantages of neural approximation. Consideration of a larger state and action space is warranted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Since two-time-scale stochastic approximation has been extensively studied recently, could the authors clarify the novel main technical contributions of this paper?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: More limitations of the approach should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and constructive comments. Here we would like to address the reviewer's concerns and hope that can help raise the rating of our paper. The detailed responses are as follows:
**Weakness #1:** It is unclear whether the approximated Q-functions converge or not.
**Our Response:** Thank you for this question. Yes, the approximated Q-functions converge. Note that due to the two-timescale nature of the updates of the two coupled parameters (i.e., Q-function values and Whittle indices) in our proposed Neural-Q-Whittle, we focus on the convergence rate of these parameters rather than the convergence rate of the approximated Q-functions as in [13, 22, 58]. Our key technique is to view Neural-Q-Whittle as a two-timescale stochastic approximation (2TSA) for finding the solution of suitable nonlinear equations. **Since we have theoretically shown in Theorem 2 that the parameters converge to their unique optimal values when the neural network function $f$ is linearized as $f_0$, the convergence of the parameters under the original neural network function $f$ is established by adding the error between $f$ and $f_0$ introduced by the local linearization, as in Theorem 1. As long as $m\rightarrow\infty$, the learned parameters converge to the true optimal parameters at the convergence speed $\mathcal{O}(1/k^{2/3})$.**
**Weakness #2:** ...$m\rightarrow \infty$ ...
**Our Response:** Yes, that is true. The overparameterization is important in characterizing the convergence of Q-learning with neural network function approximation as in [13, 22, 58]. For a small $m$ value, there will always be a non-diminishing error in the convergence bound.
**Weakness #3:** ...larger state and action space is warranted...
**Our Response:** Thank you for your suggestion. Since we are learning the Whittle index for restless multi-armed bandits (RMAB), the action space of each arm is binary, i.e., 0 and 1, as in the RMAB literature. In addition, the true Whittle index for the general RMAB problem is hard to solve and may not have a closed-form solution. Hence, we consider a special queueing scenario of the RMAB problem where the state of each arm evolves as a controlled birth-and-death process, as in [M. Larrañaga, U. Ayesta and I. M. Verloop, Dynamic Control of Birth-and-Death Restless Bandits: Application to Resource-Allocation Problems]. Per the reviewer's suggestion, we consider a larger state space, i.e., an arm with 51 states, ranging from 0 to 50 and representing the queue length. We randomly select a state and compare the Whittle index learned by our Neural-Q-Whittle with all other benchmarks in this paper. For these experimental results, **please kindly refer to the PDF in the General Response.** Again, it is clear that we make the same observations as in the current experiments. Since we will have an additional content page, we will add these results and discussions in the camera-ready version. Finally, we note that although this work focuses on the theoretical convergence analysis of the Q-learning-based Whittle index under neural network function approximation, it might be promising to implement it in real-world applications to fully leverage the strong representation ability of neural network functions, which is left as a future investigation of this work.
**Your Question #1:** Since two-time-scale stochastic approximation has been extensively studied recently, could the authors clarify the novel main technical contributions of this paper?
**Our Response:** Thank you for this question and for providing us an opportunity to clarify. Two-timescale stochastic approximation (2TSA) has indeed been extensively studied, as in the references cited in our work [3, 18, 19, 20, 21, 38, 57] and many others. However, most current works focus on either the linear setting or the nonlinear setting with i.i.d. noise [3, 18, 19, 21, 38, 57]. This may not reflect real-world reinforcement learning, since the noise observed in each iteration is often not i.i.d. but Markovian. To our knowledge, the only work that considered Markovian noise is [20], which leveraged the same techniques as in the i.i.d. noise setting [19] to explicitly characterize the covariance between the error caused by the Markovian noise and the parameters’ residual error, making the convergence analysis much more intricate. Moreover, the analysis in [20] relies on the strong monotonicity and smoothness of the considered functions. Unfortunately, these properties do not hold for the general neural network function considered in our paper. **Please kindly refer to the table in the PDF in the General Response.**
In this paper, we addressed two challenges that remain open in most previous works. First, we removed the additional projection step introduced to stabilize reinforcement learning under Markovian noise. Second, we developed a new approach to handle nonlinear neural network functions in a two-timescale stochastic approximation framework: we first deal with a locally linearized neural network function and then characterize the error between the local linearization and the original neural network function.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response and clarifying some of my questions. I would like to increase the rating.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your acknowledgement and raising the rating of our paper. Much appreciated! | Summary: This paper studies Whittle index-based Q-learning with neural network function approximations restless multi-armed bandits (RMAB) problem, which is a model-free low-complexity reinforcement learning (RL) heuristics for RMABs. Since state-action space of RMABs is exponentially growing with the number of arms, common Q-learning techniques as well as tabular Q-Whittle algorithm suffer from curse of dimensionality. Instead, the authors analyze a low-complexity neural network approximation method for Whittle index-based Q-learning under non-iid Markovian state-action observations, namely Neural-Q-Whittle. The authors formulate Neural-Q-Whittle as a nonlinear two-timescale stochastic approximation (2TSA) where the parameters of the neural network Q-function and the Whittle indices are mutually coupled and the former is updated on a faster timescale than the latter. Leveraging a Lyapunov function method, the authors provide a finite-time convergence analysis of Neural-Q-Whittle with non-iid Markovian data. The analysis involves characterizing the error between two Lyapunov functions: one for the neural network Q-function and one for the linear approximation of the neural network.
Strengths: This paper is original in the sense that it removes some of the limitations in the previous theoretical works on finite-time analysis of Whittle index, Q-learning, and nonlinear 2TSA and generalizes to neural network Q-function approximation in two-timescales under non-iid ergodic state-action process. The theoretical analysis and proofs seem technically solid and mostly clear.
Weaknesses: Since this work builds upon the limitations of related past works, it would be easier to follow and appreciate the novelty and significance if you could provide a table comparing against related works in various aspects (e.g., methods, assumptions, convergence rates, computational (time or space) and sample complexities). It could also help with presentation if you gave an informal version of your main theorem beforehand, or simplified Theorem 1 using asymptotic notation and stated the full version in the appendix.
I believe it should also be emphasized more that your convergence result relies on diminishing error bound of linear approximation of overparameterized neural networks.
Although this is mainly a theory paper, it would be interesting to see the experimental comparison to other methods such as Q-Whittle-LFA and references 13, 22, 58.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: 1) I am not sure if I understand the sentence starting with "(Step 3)" between lines 204-207. Can you clarify what you mean exactly?
2) You might include the definition of "span" operator in equation 17 as a footnote.
3) Are h, g, y in Remark 4 supposed to be h_0, g_0, y_0?
4) It is not exactly clear to me why it is desirable to update the neural network parameters and the indices at two different timescales. Do these parameters only converge in this case? Or do they converge faster compared to a single timescale? Could you clarify the need for and the significance of the two-timescale approximation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations are addressed within the text and mentioned as a future work in the conclusion (extension to multi-layer neural network).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and constructive comments, as well as for the positive rating of our work. Here we would like to address the reviewer's concerns, and we hope our responses can help raise the rating of our paper. The detailed responses are as follows:
**Weakness #1:** ...provide a table comparing to related works...simplify Theorem 1 using asymptotic notation ...
**Our Response:** Thank you for this insightful comment and suggestion. Since this paper is the first to provide a finite-time convergence rate in the complex setting of learning the Whittle index by leveraging Q-learning with neural network function approximation, it is not straightforward, nor entirely fair, to compare the convergence rates and computational (time or space) and sample complexities with those of the references listed in our paper. However, to further distinguish our work from these references, and per the reviewer's suggestion, we can still compare these works in terms of the considered setting, such as the noise model, whether an approximation was used or not, the timescale of the algorithm, and whether it aims at learning the Whittle index. We summarize these in a table (**please kindly refer to the pdf in the global response**). As for simplifying Theorem 1, we end up with $\mathcal{O}(1/k^2)+\mathcal{O}(1/k^{2/3})+c$, with $c$ being a constant depending on $m$ that goes to $0$ as $m\rightarrow \infty$. However, we believe that presenting the full version in the main paper helps to better explain each term on the r.h.s. of Eq. (17). Since we will have an additional content page, we can include these discussions in the camera-ready version of this paper.
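For concreteness, the simplified rate mentioned above could be written informally as follows. This is only a sketch: constants are suppressed, and the use of the Lyapunov function $\hat{M}$ as the bounded quantity is our assumption based on the surrounding discussion, not a quote from the paper.

```latex
% Informal sketch of the simplified Theorem 1 bound (constants suppressed):
\mathbb{E}\big[\hat{M}(\pmb{\theta}_k, \lambda_k)\big]
  \;\lesssim\; \mathcal{O}\!\left(\frac{1}{k^{2}}\right)
  \;+\; \mathcal{O}\!\left(\frac{1}{k^{2/3}}\right)
  \;+\; c(m),
\qquad c(m) \to 0 \ \text{ as } \ m \to \infty .
```

Here $k$ is the iteration count and $m$ the overparameterization width, so the $\mathcal{O}(1/k^{2/3})$ term dominates asymptotically for fixed large $m$.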
**Weakness \#2:** ...emphasized more ...overparameterized ...
**Our Response:** Thank you for this valuable suggestion. Indeed, the error bounds of linearization with respect to the original neural network functions are controlled by the overparameterization value $m$, which has an impact on the global convergence, as clearly shown in Theorem 1. We will emphasize this more in the camera-ready version.
**Weakness \#3:** ... experimental comparison ...
**Our Response:** Thank you for this suggestion. We indeed provided a comparison with Q-Whittle-LFA [57], WIQL [8], and QWIC [23] in supplementary material F (Figure 4 on page 26 of the supplementary materials). The reason that [13, 22, 58] are not compared is that our proposed Neural-Q-Whittle aims to leverage Q-learning with neural network function approximation to learn the Whittle index, which involves two coupled parameters, i.e., the Q-function values and the Whittle indices. In contrast, the algorithms in [13, 22, 58] are only Q-learning (or TD learning) with neural network function approximation, which involves only a single parameter, i.e., the Q-function values. To this end, we can only compare with benchmarks that aim at learning Whittle indices. Based on the reviewer's suggestion, and since we will have an additional content page, we could add/move the experimental results in Figure 4 to the main paper in the camera-ready version.
**Your Question #1:** ...Step 3...
**Our Response:** Sorry for the confusion. We would like to explain this with the aid of Figure 1 on page 6 of the main paper. Our goal is to characterize the finite-time convergence of the two-timescale stochastic approximation (2TSA) defined in Eq. (9), where $h$ and $g$ are related to the true neural network function $f$. To do so, we define the Lyapunov function $M(\pmb{\theta}_k, \lambda_k)$ as in Eq. (15). However, it is challenging to directly find the global optimum of the corresponding nonlinear equations due to the nonlinear neural network parameterization of the Q-function in Neural-Q-Whittle. To tackle this challenge, we first introduce a linearization $f\_0$ of the original neural network function $f$ and define a new Lyapunov function $\hat{M}(\pmb{\theta}\_k, \lambda\_k)$ as in Eq. (16). We can then study the convergence rate of the nonlinear 2TSA using this modified Lyapunov function, with smoothness properties guaranteed by the linearization. Theorem 2 provides the convergence results for the surrogate linearized function $f\_0$, assuming that the 2TSA updates in Eq. (9) are based on $f\_0$. However, the two coupled parameters $\pmb{\theta}$ and $\lambda$ in (9) are updated with respect to the true neural network function $f$, not $f\_0$. Hence, this further requires us to characterize the approximation errors between $f$ and $f_0$. Adding this error back to the result of Theorem 2, we finally obtain the desired result in Theorem 1.
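The local-linearization idea above can be illustrated with a small sketch. This is our toy illustration under stated assumptions, not the paper's exact construction: the paper's $f$ is a neural Q-function, while here `f` is a simple scalar nonlinear function and `f0` is its first-order linearization around an initial parameter `theta0`.

```python
import numpy as np

# Hedged sketch: first-order local linearization f0 of a nonlinear function f
# around an initial parameter theta0, i.e.
#   f0(theta) = f(theta0) + grad_f(theta0) . (theta - theta0).

def f(theta, x):
    # toy "one-neuron" nonlinear function: tanh(theta . x)
    return np.tanh(theta @ x)

def grad_f(theta, x):
    # d/dtheta tanh(theta . x) = (1 - tanh^2(theta . x)) * x
    return (1.0 - np.tanh(theta @ x) ** 2) * x

def f0(theta, theta0, x):
    # surrogate: linearization of f around theta0 (smooth/linear in theta)
    return f(theta0, x) + grad_f(theta0, x) @ (theta - theta0)

theta0 = np.array([0.3, -0.2])
x = np.array([1.0, 2.0])

# the linearization error |f - f0| shrinks as theta stays near theta0,
# mirroring the approximation-error term that is added back into Theorem 1
err_near = abs(f(theta0 + 0.01, x) - f0(theta0 + 0.01, theta0, x))
err_far = abs(f(theta0 + 1.0, x) - f0(theta0 + 1.0, theta0, x))
```

The analysis can then work with the smooth surrogate `f0` and separately account for `err`-style terms between `f` and `f0`.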
**Your Questions #2 and #3:** ..."span"... $h\_0, g\_0, y\_0$
**Our Response:** Thank you for this suggestion and for pointing out this typo. Yes, they should be $h_0, g_0, y_0$, and we will fix them in the camera-ready version.
**Your Question \#4:** ... two different time-scales...
**Our Response:** Thank you for this insightful question. First, we would like to point out that the two parameters do not strictly need to be updated at two timescales. However, in practice, for an update with two coupled variables of different sensitivities, it is usually empirically better to make one update faster than the other, which has been widely studied, as in [3, 8, 19, 20, 21, 38] and references therein. In our setting, the Whittle index depends on the Q-function values, and thus it is natural to update the Q-functions faster than the Whittle indices. This technique has also been used in [3, 57]. Second, from the perspective of theoretical performance analysis, this has been widely studied in the two-timescale stochastic approximation (2TSA) literature, e.g., [18, 19, 20, 21, 38], with two learning rates, i.e., $\eta_{n,k}=o(\alpha_{n,k})$ (see lines 121 and 119). Indeed, by controlling these parameters as in the two-timescale setting and as defined in Theorem 1 (line 212), we achieve the best-known convergence speed $\mathcal{O}(1/k^{2/3})$.
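The two-timescale update pattern described above can be sketched on a toy coupled system. This is a hedged illustration, not the Neural-Q-Whittle recursion: `theta` stands in for the fast iterate (Q-values) with step size `alpha`, and `lam` for the slow iterate (Whittle index) with step size `eta = o(alpha)`; the targets `lam/2` and `theta` are arbitrary choices with a unique coupled fixed point at zero.

```python
import numpy as np

# Hedged sketch of a two-timescale stochastic approximation (2TSA):
# fast iterate theta uses the larger step alpha_k, slow iterate lam uses
# eta_k = o(alpha_k), so theta tracks its lam-dependent target quickly
# while lam drifts slowly toward its theta-dependent target.

rng = np.random.default_rng(0)
theta, lam = 5.0, 5.0
for k in range(1, 20001):
    alpha = 1.0 / k ** (2.0 / 3.0)   # fast timescale
    eta = 1.0 / k                    # slow timescale, eta = o(alpha)
    noise = rng.normal(scale=0.1, size=2)
    theta += alpha * ((lam / 2.0) - theta + noise[0])  # fast update
    lam += eta * (theta - lam + noise[1])              # slow update

# the unique coupled fixed point of this toy system is theta = lam = 0,
# which both iterates approach despite the noisy Markovian-style updates
```

The separation of step sizes is what lets the analysis treat the fast iterate as (approximately) equilibrated when bounding the drift of the slow one.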
---
Rebuttal Comment 1.1:
Comment: Thank you for taking time and effort in addressing my questions and concerns and clarifying some of the confusing points. I am satisfied with your response and will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your acknowledgement and keeping the positive rating of our paper. Much appreciated! | Rebuttal 1:
Rebuttal: The attached PDF contains a table for **Reviewer KAmk** and **Reviewer rZoR**, and a figure for **Reviewer rZoR**. The detailed responses to the corresponding comments are provided below in our rebuttal to each reviewer.
Pdf: /pdf/7eb7d09e27fac333bdbca0278671abfdd80d4e51.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper provides a finite-time analysis of Neural-Q-Whittle. The authors propose Neural-Q-Whittle, a novel Whittle index-based Q-learning algorithm with neural network function approximation for RMAB. Their analysis leverages a Lyapunov drift approach to capture the evolution of two coupled parameters, and the nonlinearity in value function approximation further requires us to characterize the approximation error. They also conduct experiments to validate the convergence performance of Neural-Q-Whittle, and verify the sufficiency of their proposed condition for the stability of Neural-Q-Whittle.
Strengths: 1. This paper provides a non-asymptotic convergence rate analysis of Neural-Q-Whittle with two coupled parameters updated in two timescales under Markovian observations without the extra projection step.
2. The authors propose Neural-Q-Whittle, a novel Whittle index-based Q-learning algorithm with neural network function approximation for RMAB.
3. The authors establish the first finite-time analysis of Neural-Q-Whittle under Markovian observations. Their analysis leverages a Lyapunov drift approach to capture the evolution of two coupled parameters, and the nonlinearity in value function approximation further requires us to characterize the approximation error.
4. The writing is good.
Weaknesses: Compared with existing works [13, 22, 58] on Q-learning with neural network function approximation, the technical contribution of this paper is somewhat limited. It would be better if the authors could include additional material to clarify the technical contribution of the paper compared with prior works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the Weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review and constructive comments, as well as for the positive rating of our work. Here we would like to address the reviewer's concerns, and we hope our responses can help raise the rating of our paper. The detailed responses are as follows:
**Weakness #1:** Compared with existing works [13, 22, 58] on Q-learning with neural network function approximation, the technical contribution of this paper is somewhat limited. It would be better if the authors could include additional material to clarify the technical contribution of the paper compared with prior works.
**Our Response:** Thank you for this insightful comment and providing us a chance to further clarify the contributions of this paper.
Compared with existing works [13, 22, 58] on Q-learning with neural network function approximation, the first fundamental difference of our proposed Neural-Q-Whittle lies in the algorithm framework. Existing works [13, 22, 58] aimed to learn Q-function values with neural network function approximation, while our proposed Neural-Q-Whittle aims to learn the Whittle index for the restless multi-armed bandits (RMAB) problem by leveraging Q-learning with neural network function approximation. In particular, our Neural-Q-Whittle involves a two-timescale update between two coupled parameters, i.e., Q-function values $Q(s,a), \forall s,a$ and Whittle indices $\lambda(s), \forall s$, as defined in Eq. (4)-(5). **This renders the existing finite-time analyses in [13, 22, 58] not applicable to our Neural-Q-Whittle**, since [13, 22, 58] contain only a single-timescale update on the Q-function values $Q(s,a), \forall s, a.$ **Hence, a fundamentally different technique (i.e., two-timescale analysis) is required to establish the finite-time convergence of our proposed Neural-Q-Whittle with two coupled parameters.**
Second, due to the Markovian observations of Q-learning at each iteration, conventional single-timescale Q-learning with neural network function approximation [13, 22, 58] **required an additional projection step for the update of the neural network function parameters** so as to guarantee boundedness between the unknown parameter at any time step and its initialization, which stabilizes the updates. However, this is in some cases impractical. **Hence, how to remove this additional projection step is an open research problem in the community.** One possible way to remove the projection step is to treat the Q-learning (or TD learning) update as a one-timescale stochastic approximation (SA), as in [15, 47]. However, these results only apply to Q-learning with linear function approximation. To the best of our knowledge, the results for single-timescale Q-learning with neural network function approximation are still unknown, let alone for the two-timescale Neural-Q-Whittle learning considered in this paper. Therefore, a natural question arises: **Is it possible to provide a non-asymptotic convergence rate analysis of Neural-Q-Whittle with two coupled parameters updated at two timescales under Markovian observations, without the extra projection step?** Indeed, we provide an affirmative answer to this question in this paper.
With that being said, our major contribution in this paper is to establish the first-ever finite-time analysis of Neural-Q-Whittle under Markovian observations. **Due to the two-timescale nature of the updates of the two coupled parameters (i.e., Q-function values and Whittle indices) in Neural-Q-Whittle, we focus on the convergence rate of these parameters rather than the convergence rate of the approximated Q-functions as in [13, 22, 58].** Our key technique is to view Neural-Q-Whittle as a two-timescale stochastic approximation (2TSA) for finding the solution of suitable nonlinear equations. Different from recent works on the finite-time analysis of a general 2TSA [20] or with linear function approximation [57], the nonlinear parameterization of the Q-function in Neural-Q-Whittle under Markovian observations imposes significant difficulty in finding the global optimum of the corresponding nonlinear equations. To mitigate this, we first approximate the original neural network function with a collection of local linearizations and focus on finding a surrogate Q-function in the neural network function class that well approximates the optimum. Our finite-time analysis then requires us to consider two Lyapunov functions that carefully characterize the coupling between the iterates of Q-function values and Whittle indices, with one Lyapunov function defined with respect to the true neural network function and the other defined with respect to the locally linearized neural network function. We then characterize the errors between these two Lyapunov functions. Putting them together, we prove that Neural-Q-Whittle achieves convergence in expectation at a rate of $\mathcal{O}(1/k^{2/3})$, where $k$ is the number of iterations.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response and clarification.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your acknowledgement and keeping the positive rating of our paper. Much appreciated! | null | null | null | null | null | null |
Visual Programming for Step-by-Step Text-to-Image Generation and Evaluation | Accept (poster) | Summary: This paper makes two contributions. VPGen is a T2I generation framework that first generates objects/counts, then layouts, and finally the image. VPEval is a T2I evaluation pipeline that provides a more comprehensive analysis correlated with human judgment. The authors demonstrate that the step-by-step VPGen approach generates images more aligned with the input text compared to Stable Diffusion and other baselines. This is evaluated using the newly proposed VPEval scores.
Strengths: The proposed VPGen decomposition demonstrates better controllability for image generation, especially in terms of following the exact object count and spatial relation from text prompts. The new VPEval score offers a more comprehensive evaluation beyond just image quality. I believe this score will be useful for evaluating controllable generation.
Weaknesses: The results section lacks the standard evaluation metrics for image generation, such as the FID and IS scores. While the focus of this paper is on better controllability, and the proposed VPEval is clearly a better choice for that, I would still like to see some benchmarks of image quality and diversity.
The first step of VPGen seems limited. In the examples provided, the text has to include the exact number for each object. So is the first step just doing text parsing? Will it still work if I give it a sentence like “a small cozy office room”?
Also, the motivation for training an LM is not clear to me. Many existing works can generate layout bounding boxes directly; what is the extra benefit of training an LM? Can you do interactive editing?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) It would be great if the authors could also provide standard evaluation benchmarks.
(2) Can the authors explain their design choice of using an LM in VPGen?
(3) More visualization results would be appreciated.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the useful feedback and for pointing out our strengths in providing useful image generation and evaluation frameworks. We hope to address your questions and concerns below.
**W1. Standard image quality metric (FID).** Please see the general response.
**W2-1. Prompts without an exact number of counts.** Thanks for the suggestion; users can indeed provide prompts without specific numbers. We find that VPGen can successfully generate layouts/images even when prompts do not have explicit counts. As shown in Figure B (e) in the attached PDF, our VPGen generates two Pikachus from the prompt “Pikachus on the table”.
**W2-2. Prompts where details are unspecified.**
Regarding your specific prompt “a small cozy office room”, we show two images from two different checkpoints (Flickr30k vs. Flickr30k+COCO+PaintSkills) in Figure C (c) in the attached PDF. Note that the bounding boxes of the Flickr30k Entities dataset are explicitly grounded in the Flickr30k captions, while the COCO captions and bounding boxes are annotated separately. This results in two different layout generation styles: 1) COCO style - the LM generates object layouts even if they are not explicitly mentioned in the prompt; 2) Flickr30k Entities style - the LM only generates objects that are explicitly mentioned in the prompt, and the downstream image generation model (GLIGEN) generates the image details. Users can construct their own text-layout grounding style for their use cases. We will add visualizations of such prompts to the paper.
**W3 & Q2. Motivation for training an LM for VPGen.** In our initial experiments, we tested ChatGPT to generate spatial layouts by showing it in-context examples and found that the generated layouts were often inaccurate or not meaningful. For example, ChatGPT often generates a list of bounding boxes that are each the same size as the entire image (e.g., `[object 1 (0,0,99,99), object 2 (0,0,99,99), object 3 (0,0,99,99)]`). We conjecture that this is because its training corpus might not include many bounding boxes. We therefore decided to collect text-layout annotations and finetune a language model to generate spatial layouts from text prompts.
For a quantitative comparison, we implement VPGen with ChatGPT-3.5-turbo with 36 in-context examples covering different skills and compare it with (open-sourced) Vicuna13B trained on Flickr30k+COCO+PaintSkills. As shown in Table B, Vicuna13B-based VPGen shows higher skill-based and open-ended VPEval accuracies than ChatGPT-based VPGen. We will add this interesting result to the paper.
Table B. ChatGPT vs. Vicuna13B.
| Model | VPEval Skill-based Acc. (%) | VPEval Open-ended Acc. (%) |
|-----------|------------|-------------|
| ChatGPT (36 examples) + GLIGEN | 42.2 | 66.3 |
| Vicuna13B + GLIGEN | **47.0** | **70.0** |
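The text-to-layout interface discussed above (a prompt mapped to a list of named boxes) can be sketched with a small parser. The serialization format here is assumed from the rebuttal's example string on a 0-99 grid, not taken from the paper, and the degeneracy check is our illustrative heuristic for the "whole-canvas box" failure mode observed with in-context ChatGPT.

```python
import re

# Hedged sketch: parse a layout string of the assumed form
# "obj1 (x0,y0,x1,y1), obj2 (x0,y0,x1,y1)" on a 0-99 coordinate grid,
# and flag degenerate boxes that cover (almost) the whole canvas.

BOX = re.compile(r"([\w ]+?)\s*\((\d+),(\d+),(\d+),(\d+)\)")

def parse_layout(s):
    # returns [(object_name, (x0, y0, x1, y1)), ...]
    return [(name.strip(), tuple(map(int, xy)))
            for name, *xy in BOX.findall(s)]

def degenerate(box, canvas=99, tol=5):
    # a box spanning (nearly) the entire canvas carries no layout information
    x0, y0, x1, y1 = box
    return x0 <= tol and y0 <= tol and x1 >= canvas - tol and y1 >= canvas - tol

good = parse_layout("dog (10,40,45,90), frisbee (50,20,70,35)")
bad = parse_layout("dog (0,0,99,99), frisbee (0,0,99,99)")
```

A check like `degenerate` makes the qualitative failure described in the rebuttal (all boxes equal to the full image) easy to quantify across many prompts.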
**Q3. More Visualizations.** Thanks for the suggestion. In the attached PDF, we provide different visualization results covering unseen objects (Figure A), counting (Figure B), challenging/complex prompts (Figure C), and layout error categorization (Figure D). We will include these and more visualizations in the paper.
---
Rebuttal Comment 1.1:
Comment: I thank authors for the detailed response and really appreciate their effort. Authors clarified many points and added crucial experiments in their rebuttal. Most of my concerns have been addressed. One comment is that I agree with other reviewers the evaluation pipeline might not be robust in some cases. I do encourage authors to list out the possible failure cases in their updated version. The overall quality has improved and I raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thanks for your supportive and positive discussion! We are glad that you appreciated our clarifications and additional experiments.
Yes, following your suggestion, we will definitely include more qualitative examples in the final version. | Summary: This paper proposes a visual-program-based evaluation method VPEval to evaluate text-to-image models. Their method relies on LLM which can call different expert models in different tasks like object detection, OCR, spatial understanding, etc. to evaluate the consistency between the text and the generated image. The human evaluation shows that compared to traditional metrics, their evaluation method aligns with human better. Besides, they introduce a novel interpretable step-by-step text-to-image generation framework VPGen to improve the consistency between generated images and texts in count, spatial, and scale skills.
Strengths: 1. The paper writing is clear
2. Their framework VPGen is simple and novel. I like the idea of first training an LLM that generates high-level image layouts from the given prompt, and then generating the image based on the layout.
3. VPGen is effective with improved consistency in counting/spatial/scale relationships.
4. The evaluation framework VPEval based on visual program is also novel and has better interpretability compared to previous methods. Besides, it aligns with human judgement better.
Weaknesses: 1. This paper is highly related to and builds on visual programming. Although the authors cite related papers like Visual Programming [11] and ViperGPT [12] in the related work section (L110), a brief introduction to visual programming and a discussion of the differences between [11, 12] and this paper are missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. As the evaluation is based on ChatGPT, it would be good to include an API cost analysis.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors listed and discussed the limitations including the suboptimal performance with non-English prompts and unnatural images. Potential broader impacts like generating harmful images are also discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the useful feedback and for pointing out our strengths of the effectiveness of VPGen and VPEval, as well as clear writing. We hope to address your questions and concerns below.
**W1. Introduction of Visual Programming.** We’d like to bring your attention to Sec. 1, where we explain how the recent LLM+visual-module frameworks (e.g., Visual Programming and ViperGPT) work (L28-31; “LLMs can also tackle certain vision-and-language tasks such as visual question answering and visual grounding, by generating visual programs (VP) that can control external visual modules and combine their outputs to get the final response”), and how our frameworks differ from them (L31-34; “However, no prior works have shown an analysis of combining LLMs and different visual modules for the challenging text-to-image (T2I) generation task. Our work proposes two novel interpretable/explainable VP frameworks combining LLMs and visual modules for T2I generation and evaluation”). In the next version, we will further expand the introduction of the visual programming framework and discuss the differences between existing works and ours to make this clearer.
**Q1. VPEval API Costs.** Thanks for the suggestion. VPEval costs about $0.007 (less than 1 cent) to evaluate one image with the ChatGPT 3.5 turbo API. We will add this information to the paper. Please also note that we will release all the generated programs from our evaluations for others to use (L225-227).
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I appreciate the additional qualitative examples and quantitative evaluation. Given the good novelty and the simplicity of the methodology, I'll increase my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thanks for your supportive comments! We are glad that you appreciated our additional examples + evaluation (and novelty + simplicity of our methodology).
BTW, just a gentle note - the increase in the score (as you mentioned in your last comment) may not have gotten saved in the system.
Thanks again! | Summary: This paper extends previous work in the vision and language space that use visual programs as an intermediate step, to the problem of text to image generation and subsequently, its evaluation. It proposes VPGen, a neuro-symbolic method that is composed of specific modules that count objects, generates layouts using a LLM and then generates an image using GLIGEN, a layout-to-image model. This allows for interpretable inspection of the intermediate steps as well as allows for more controllability in the image generation process. The paper also proposes VPEval, a method to evaluate T2I models using specific modules that also generate interpretable programs inspecting specific skills such as counting, existence of object, spatial, scale and text rendering. The proposed method (VPGen) outperforms previous approaches on skills such as count, spatial and scale, demonstrating that the approach allows for better fine-grained control over the image generation process.
Strengths: ## Quality, Originality and Significance
* There have been several methods proposed over the last year for text-to-image generation, such as Stable Diffusion, Imagen, Parti, etc., but the community still lacked an adequate method for evaluating the quality of generated images in a way that is also interpretable. This paper provides both: an interpretable text-to-image generation pipeline that either outperforms or is competitive with existing (open-source) state-of-the-art text-to-image generation models, as well as a way to evaluate them.
* Improvements in layout generation by using an LM, allowing generalization to unseen concepts instead of depending on a pre-defined class set, broaden the scope of the types of prompts that can be incorporated into this interpretable pipeline.
* Human judgements and correlation analysis of existing evaluation methods compared with the proposed method demonstrates that VPEval is a suitable metric for T2I generation both in terms of interpretability while also being correlated with human judgement.
## Clarity
* The paper is very well written, and is very easy to follow.
* The paper together with the appendix provides sufficient information on implementation details, qualitative examples and analysis of programs and prompts.
Weaknesses: * Missing error analysis or categorization of the types of examples on which VPGen fails to produce accurate images (both when the layout generator fails and when GLIGEN fails would be useful).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * Does the domain of the image-text data used for fine-tuning the model in VPGen matter at all (for example if it were synthetic images vs real world images)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the useful feedback and for pointing out our strengths of providing a strong interpretable T2I model, providing a much-needed interpretable T2I evaluation method, and having a well-written and detailed paper. We hope to address your questions and concerns below.
**W1. Additional error category analysis.** Thanks for the suggestion; this is a good idea. We additionally investigated the failure cases of layout generation and identify two error categories: 1) generating objects not specified in the text prompt, and 2) not generating some objects mentioned in the text prompt. We show the error analysis results in Figure D in the attached PDF. We will include these examples/categorizations in the paper.
**Q1. Training Domain Impact.** Our layout generation module (Vicuna 13B) in VPGen is trained on a mix of simulated (PaintSkills dataset) and real-world images (MSCOCO/Flickr30K). We conducted an ablation study to determine if there is a difference between simulated and real-world images. We train Vicuna on only PaintSkills and on only Flickr30K and compare the results on both skill-based and open-ended prompts. For skill-based prompts, PaintSkills training shows a slightly higher average accuracy (46.8) than Flickr30K training (44.8). For open-ended prompts, PaintSkills training obtains a VPEval score of 66.4, lower than Flickr30K training (71.1). This indicates that training on synthetic data can inject knowledge of specific skills such as counting and spatial relation understanding, but it is hard to cover diverse objects and attributes with synthetic data alone. We will add these interesting findings to the paper.
Strengths: S1: Both VPGen and VPEval are sound. It leverages existing tools to make the T2I process more controllable (via layout) and interpretable.
S2: The paper is well-written.
S3: Solid experiments. The paper compares various image generation models (Table 1) and provides results based on particular skills (skill-based prompts) and their combination (open-ended prompts). Further, the paper shows a good correlation of VPEval with human judgments. Though on this point I do have a concern, since VPEval is “gamed” to showcase VPGen (see Weaknesses).
Weaknesses: W1: It is unclear how generalizable/robust the proposed T2I approach and evaluation protocol are to “non-canonical” prompts shown in Figure 1. For VPGen, what if the prompts are really complex with multiple objects that interact with one another? What if the prompts do not include the numbers of objects (e.g., Pikachus on the table)? Similarly, since VPEval is geared toward showcasing VPGen, some aspects of evaluation are missing such as object attributes (L206) (e.g., a purple Pikachu and a pink Pikachu).
W2: The novelty of both VPGen and VPEval should be highlighted further. Perhaps expand L99-105 in more detail and demonstrate L101-102 empirically. Furthermore, it would also be appreciated how the VPEval is compared to VQA-based approaches (L35) like TIFA and SeeTRUE (https://arxiv.org/abs/2305.10400), which in my opinion are also interpretable.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the useful feedback and for pointing out our strengths - a well-written paper and solid experiments. We hope to address your concerns below.
**W1-1. Challenging/Complex prompts.** Please see the general response.
**W1-2. Prompts without an exact number of counts.** Thanks for the suggestion, users can indeed provide prompts without specific numbers. We find that VPGen can successfully generate layouts/images even when prompts do not have explicit counts. As shown in Figure B (e) in the attached PDF, our VPGen generates two Pikachus from the prompt “Pikachus on the table”.
**W1-3. Does VPEval check attributes?** Yes. While VPEval’s skill-based prompts do not have a specific split for ‘attribute’, our open-ended evaluation prompts do include prompts that have object attributes (e.g., “A red motorcycle parked by paint chipped doors.” has attributes ‘red’, ‘parked’, and ‘paint chipped’). In open-ended evaluation, the object attributes are usually evaluated with the VQA module. Figure 4 in the supplementary pdf also shows examples with attributes. For the mentioned example prompt “a purple Pikachu and pink Pikachu”, our program generator LM (ChatGPT) generates a program that checks for the pink and purple color attributes, as shown in Code A below.
Code A. VPEval generated evaluation modules for the prompt “a purple Pikachu and pink Pikachu”.
```python
objectEval(image, 'Pikachu')
vqa(image, 'is there a purple Pikachu?', 'yes,no', 'yes')
vqa(image, 'is there a pink Pikachu?', 'yes,no', 'yes')
countEval(objDet(image, 'Pikachu'), '==2')
```
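For intuition, a module call like `countEval` above reduces to a simple predicate over detection results. Below is a hypothetical sketch of how such a module could work; the helper name, signature, and condition grammar are my assumptions, not the paper's actual implementation:

```python
import re

def count_eval(detections, condition):
    """Hypothetical countEval-style module: compare the number of detected
    boxes against a condition string such as '==2' or '>=3'."""
    n = len(detections)
    op, target = re.match(r"(==|>=|<=|>|<)(\d+)", condition).groups()
    target = int(target)
    # evaluate the comparison named by the operator string
    return {
        "==": n == target,
        ">=": n >= target,
        "<=": n <= target,
        ">": n > target,
        "<": n < target,
    }[op]
```

A program line like `countEval(objDet(image, 'Pikachu'), '==2')` would then pass exactly when the detector returns two boxes.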
**W2-1. Baseline (closed-vocab layout generator).** We will expand the discussion of text-to-layout-to-image generation (L99-105) with some more details. Regarding L101-102 _“However, the previous approaches train a new layout predictor module from scratch, so they are limited to predicting layouts with a predefined number of classes and cannot place new objects unseen during training"_, we experiment with a layout generator with predefined 80 object classes of COCO (unlike our open-vocabulary LM Vicuna 13B). As shown in Figure A (d) in the attached PDF, the closed-vocabulary layout generator fails to generate layouts with the unseen object “Pikachu” and guides GLIGEN with the wrong layout (Note: The Pikachu in the background is generated by GLIGEN, and there is no layout bounding box for the Pikachu).
**W2-2. VPEval compared to TIFA / SeeTRUE.** While both TIFA/SeeTRUE are also based on atomic questions and thus considered interpretable, the main difference between TIFA/SeeTRUE and our VPEval is the adoption of diverse evaluation models. While TIFA/SeeTRUE rely on a single evaluation module (VQA) that cannot cover different T2I skills, our VPEval adopts diverse evaluation models, including object detection, counting, OCR, etc. Sec. 5.4 shows that our diverse evaluation modules of VPEval achieve higher human correlation than TIFA on both skill-based and open-ended prompts. VPEval can also provide visual explanations (in the form of bounding boxes) and error messages, whereas TIFA/SeeTRUE cannot. Please also note that the SeeTRUE paper appeared on Arxiv on May 17, 2023, the same date as NeurIPS 2023 full paper submission deadline; we will cite SeeTRUE in the paper. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback and for recognizing our strengths:
- addressing an important/foundational problem in text-to-image generation (aEEd)
- developing strong/interpretable/useful text-to-image generation and evaluation frameworks (aEEd, 1fZw, GZ2e, PtVL, bJid)
- providing better spatial controllability for text-to-image generation (aEEd, 1fZw, PtVL, bJid)
- having generalizable/scalable frameworks (aEEd, GZ2e)
- providing solid experiments (1fZw, GZ2e, PtVL)
- and being well-written (1fZw, GZ2e).
**Attached figure-PDF**
In the attached PDF, we include visualizations that cover
- unseen objects (Figure A)
- counting beyond 4 (Figure B)
- challenging/complex prompts (Figure C)
- and layout error categorization (Figure D)
to address your comments.
**Common Answers**
**Reviewer aEEd W1. and bJid W1. - image quality metric (FID).** Thanks for the suggestion. In Table A, we compare our VPGen (Vicuna13B+GLIGEN) to its backbone Stable Diffusion (SD) v1.4, in FID (30K images of COCO val 2014 split) as well as VPEval Acc. Both VPGen checkpoints show better Skill-based Acc than SD v1.4 while achieving comparable Open-ended Acc. and FID.
In FID (lower is better), we find VPGen (Flickr30k) < SD v1.4 < VPGen (Flickr30k+COCO+Paintskills). We think the slight increase (though still reasonably good) in the FID of the "Flickr30k+COCO+Paintskills" checkpoint is because the layouts of PaintSkills differ from those of natural scenes (COCO and Flickr30k). We will add the results to the paper.
Table A. VPEval accuracy and FID.
| Model | VPEval Skill-based Acc. (%) $\uparrow$ | VPEval Open-ended Acc. (%) $\uparrow$ | FID (COCO 30K) $\downarrow$ |
|---|:---:|:---:|:---:|
| SD v1.4 | 37.4 | 70.3 | 16.5 |
| VPGen (Flickr30k+COCO+Paintskills) | **47.0** | 70.0 | 20.1 |
| VPGen (Flickr30k) | 44.6 | **71.2** | **15.9** |
**Reviewer aEEd Q5-2. & Q6. and Reviewer 1fZw W1-1. Challenging/complex prompts.** As requested, we show images generated with the complex prompts (1) “A realistic photo of a Pomeranian dressed up like a 1980s professional wrestler with neon green and neon orange face paint and bright green wrestling tights with bright orange boots” (from DrawBench [24]) and (2) “a circle with 3 cm radius and a circle with 6 cm radius”, in Figure C (a,b) in the attached PDF.
VPGen understands the important parts of the prompts (e.g., generating ‘realistic’, ‘Pomeranian dog’, ‘bright orange boots’; generating layouts of two circles in different sizes), but misses some aspects (e.g., the bigger circle is not twice the size of the smaller circle). This is probably because the training prompts (COCO, PaintSkills, Flickr30k) do not include many prompts written in these styles. We believe that scaling datasets with diverse sources can further improve VPGen.
Pdf: /pdf/ff4aede9d0548459cfd666e1deaeb2a2a8d5e965.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper delves into the intersection of large language models (LLMs) and their applications in vision-and-language tasks, specifically focusing on text-to-image (T2I) generation. The authors identify a gap in the existing literature: no thorough analysis has been conducted on the synergy of LLMs and various visual modules for "complex" T2I generation tasks.
To address this, they propose two novel frameworks. VPGEN: This framework offers a step-by-step approach to T2I generation, segmenting the process into three distinct stages: object/count generation, layout generation, and image generation. They fine-tune Vicuna to manage the first two stages, and the results demonstrate improved control over layout creation. The final stage, image generation, incorporates existing models like GLIGEN. Notably, VPGEN's design capitalizes on the inherent knowledge of pre-trained LLMs, granting it the ability to recognize objects that have not been predefined, surpassing the capabilities of older layout-guided T2I techniques.
VPEVAL: The authors propose this for evaluating the generated images. Unlike conventional T2I evaluation methods, which primarily gauge visual quality and image-text alignment through a single visual module, VPEVAL emphasizes interpretability and uses multiple modules to evaluate the generated image. In essence, it employs evaluation programs that invoke a variety of visual modules, each targeting distinct T2I skills.
The paper's findings indicate that the VPGEN+GLIGEN combination showcases relatively better performance, especially when precision in layouts and spatial relationships is paramount. The VPEVAL evaluation method also seems to align with human assessment.
In summary, the contributions are:
- The introduction of VPGEN, an interpretable T2I generation framework that dissects the T2I process into three modules.
- The proposal of VPEVAL, an evaluation framework for T2I tasks that enhances explainability and thorough analysis by invoking diverse visual modules.
- A detailed analysis of various T2I models, highlighting the superior layout control of VPGEN and the human-centric alignment of VPEVAL.
Strengths: Overall, it's a great work, and the paper addresses some fundamental limitations in a unique way.
The paper addresses a very important problem in the image generation realm. Most existing image generation models, including the ones trained on massive datasets, are not good at generating images with spatial consistency, generating the correct number of objects, or understanding the size of objects. This bias has been highlighted by various papers since 2018 and still exists in modern generative models. The paper proposes a solution by combining an LLM fine-tuned on text-to-layout pairs with a layout-to-image generation model. This fine-tuning provides the LLM with the ability to understand the spatial relationships between the generated objects, thus removing a limitation of prior results.
The proposed VPEval combines several modules to evaluate the quality of the generated images and is also interpretable thanks to the generated programs. Using such visual programming is novel.
Adaptability - as the modules in image generation and image evaluation improve the proposed approach would also improve in performance.
Offering visual+textual explanations is a strength as it increases the interpretability of the evaluation.
Weaknesses: The proposed 5 evaluation modules might still not capture the generated image quality and other complex semantics. Other image quality metrics could be incorporated and evaluated.
The paper emphasizes the two-step generation heavily (object count and layout generation), but fails to conduct any ablation studies to prove the effectiveness of this pipeline. Since the process is decomposed into different steps, the interplay and coherence between these steps are vital. Errors or inconsistencies in earlier steps (like object/count generation) could cascade and affect the final image's quality.
Relying on bounding boxes to represent layout may be simplistic. The boxes certainly encode basic spatial relations, but might miss more detailed pose, occlusion, and depth information.
Associating object names in texts to layouts could be unreliable for ambiguous or synonymous words. The model may lack grounding to map words to visual concepts.
For example, consider the word "bat", which could refer to: a baseball bat, or a flying bat (the animal).
Or the word "apple", which could refer to: the fruit apple, or the technology company Apple.
Without proper grounding between language and vision, the model may struggle to determine which visual concept is being referred to based on just the word alone.
Some detailed examples to illustrate this:
- The text "a man holding a bat" is ambiguous: is this referring to a baseball bat or a flying animal bat? The model may wrongly depict it without the proper grounding.
- The text "a logo of an apple" could wrongly depict the fruit when the technology company was meant.
- Synonyms like "couch" vs. "sofa" would need to be mapped to the same visual concept.
So in summary, relying purely on language without grounding it properly in visual concepts can lead to ambiguity in mapping words to the intended visual representations. Providing more context and grounding is important to resolve this.
Granularity of Layout Representation: VPGEN decomposes bounding box coordinates into a [0,1] scale and quantizes them into 100 bins. Such discretization can potentially lead to a loss of finer details in bounding box representation, which might impact the accuracy of object placement in the generated images.
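To make the potential precision loss concrete, here is a minimal sketch of such a [0,1]-normalization and 100-bin quantization scheme (the helper names are hypothetical, not VPGEN's actual code): the worst-case rounding error is half a bin, i.e. 0.5% of the image dimension.

```python
def quantize_box(box_xyxy, img_w, img_h, num_bins=100):
    """Normalize pixel xyxy coordinates to [0, 1] and map each to an
    integer bin in 0..num_bins-1 (hypothetical sketch)."""
    x1, y1, x2, y2 = box_xyxy
    norm = (x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h)
    return [min(int(v * num_bins), num_bins - 1) for v in norm]

def dequantize_box(bins, img_w, img_h, num_bins=100):
    """Recover pixel coordinates from bin centers; the worst-case error
    is half a bin width, i.e. 0.5% of the image dimension."""
    centers = [(b + 0.5) / num_bins for b in bins]
    return [centers[0] * img_w, centers[1] * img_h,
            centers[2] * img_w, centers[3] * img_h]
```

With 100 bins on a 640-pixel-wide image, a box edge can be off by at most 3.2 pixels after a quantize/dequantize round trip, which gives a sense of the granularity at stake.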
Two human evaluators seem too few to conclude that the results align with human judgment, and the setting of the human evaluation is a bit unclear, since humans evaluate subjectively (especially on open-ended tasks). Could the authors shed some light on this setting? Also, the correlation does not seem very high for objects (63.7?).
Minor comments:
The word "skill" is mentioned early on without much context, which might be ambiguous to readers (just a minor presentation suggestion).
Explaining what the paper means by "challenging" T2I tasks might be helpful for better clarity and presentation.
Scaling the approach to generate complex high-resolution images with many objects and intricate relations may be difficult. The text-to-layout-to-image pipeline has limitations. Any further results to prove or disprove this might be helpful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How would the authors compare their approach with Visual ChatGPT (https://arxiv.org/pdf/2303.04671.pdf)? It seems to have an image generation module.
2. Does quantizing the bounding box coordinates into discrete bins lose precision? Does this lead to misalignment between the predicted layout and the actual image content?
3. Unseen Objects: The ability to generate layouts of objects not seen during training (e.g., 'pikachu') can be an advantage, but it also raises questions about the model's ability to accurately represent unfamiliar objects in space. How well can it handle a completely novel entity in terms of spatial characteristics, especially when combined with other objects?
4. How many in-context examples are used for the VPEval task?
5. Can the authors provide results for counts greater than 4 (Fig. 5)? And can the model generate images like "generate a circle with 3 cm radius and generate a circle with 6 cm radius", essentially saying one is smaller than the other?
6. The model has a limited encoding capacity to represent all possible object combinations and numbers. To understand whether unusual or out-of-distribution combinations at test time could confuse the layout prediction stage, can the authors provide some complex compositional image generation results?
7. What is the inference time?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have spoken about the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the useful feedback and for pointing out our strengths in addressing important issues in the image generation community regarding spatial control and having an adaptable/interpretable evaluation framework.
**W1. Image quality metric (FID).** Please see the general response.
**W2. Error propagation analysis.** We analyze the performance of each step in the VPGen pipeline to determine how much error gets propagated.
1. _Object Recall_: The ratio of whether correct objects are included in the generated layouts.
2. _Object Count_: The ratio of generated layouts that include the correct number of objects.
3. _Layout Accuracy_: VPEval accuracy of the generated layouts (using the generated layouts as the object detection results).
4. _Image Accuracy_: VPEval accuracy of the final image by GLIGEN (in Table 1).
As shown in Table B, the layout accuracy is much higher than image accuracy, especially for Count/Spatial skills. This indicates that the major error source is the image rendering step (with GLIGEN), and using a more accurate layout-to-image generation model in the future would improve the accuracy of the VPGen pipeline. We will incorporate the results in the paper.
Table B. Step-wise error analysis of VPGen. For the ‘Spatial’ skill, “front” and “behind” splits are skipped in this table since Vicuna13B does not generate depth information.
| Skills | Vicuna13B Object Recall (%) | Vicuna13B Object Count (%) | Vicuna13B Layout Accuracy (%) | Layout-to-Image (GLIGEN) Accuracy (%) |
|---|:---:|:---:|:---:|:---:|
| Object | 99 | 99 | 99 | 97 |
| Count | 99 | 99 | 99 | 72 |
| Spatial | 98 | 98 | 88 | 34 |
| Scale | 93 | 93 | 38 | 23 |
**W3. Bounding box layout.** While we can also guide image generation models with pose keypoints/segmentation/depth maps, in VPGen, we choose the bounding box layout format because of its efficiency; bounding boxes require a much smaller number of tokens to represent compared to other formats (e.g., an xyxy-format bounding box can be represented with 4 tokens, while a 64x64 segmentation map requires 4096 tokens).
Please also note that our VPGen framework can be extended to pose/occlusion/depth guidance when we find an efficient way to generate them with LM and have access to an image generation model that can take such information as input.
**W4. Words can be ambiguous.** We agree that text prompts do not always have detailed information. While we focus on introducing the first VP framework for text-to-image generation, we suggest several ways to address the ambiguity below. As these ideas involve new method design/data collection/experiments, we leave this to future work.
1. Multi-turn interaction
```
User: “Draw a bat”
System: “The word ‘bat’ is ambiguous, do you mean an animal or a baseball bat?”
User: “animal”
```
2. Show an exemplar image using a multimodal LM
```
User: “Draw a bat like <image>”
```
**W5 & Q2. Granularity of layout.** We follow previous LM-based object detection work in normalizing/discretizing the boxes [48, 49] (L137). In Table C, the 100 and 1000 bin settings show almost identical accuracies, suggesting that the current 100-bin discretization does not hurt the accuracy of object placement. We will add the results to the paper.
Table C. 1000 vs. 100 bins.
| # Bins | Avg. VPEval skill-based Acc. | VPEval open-ended Acc. |
|---|:---:|:---:|
| 1000 | 47.8 | 69.1 |
| 100 | 47.0 | 70.0 |
**W6. Human eval setup.** For open-ended eval, we borrow the human evaluation scores (with two annotators) from TIFA [35]. Following TIFA, we use two human annotators for skill-based prompt evaluation as the prompts are short and straightforward (e.g., “two dogs”). Our annotators achieve high inter-annotator agreement with Cohen’s $\kappa$ and Krippendorff’s $\alpha = 0.85$ (> 0.8 indicates near-perfect [61; 62; 63], L304-306). Also, note that Spearman’s $\rho=63.7$ of VPEval-Object is higher than other methods in Table 3.
**Minor comments - presentation suggestions.** Thanks for the suggestions. We will explain/clarify ‘skill’ and ‘challenging’ in the paper.
**Q1. Comparison to Visual ChatGPT.** Visual ChatGPT directly calls a T2I model API (SD) with a given prompt to generate an image. In contrast, in our VPGen, LM has a more important role of semantic layout parsing with two steps: 1) generating object counts and 2) generating layouts given object counts.
**Q2. Precision loss from discrete bins?** We find that using bins does not impact performance. Please see the answer to “W5 & Q2. Granularity of Layout Representation.”
**Q3. Unseen object spatial control.** As suggested, we generate images with a series of prompts to see how well it handles placing unseen objects (Pikachus / Link) in different spatial relations (on a table / holding). As shown in Figure A (a,b,c) in the attached PDF, our layout generation LM can place the unseen objects in the correct locations.
**Q4. In-context examples used for VPEval.** For a fair comparison with TIFA [35], we follow the same 12 prompts used in TIFA’s question generation for in-context examples for VPEval program generation. We will include this detail in the paper.
**Q5-1. Counts > 4.** We show the count skill results with 5, 6, and 7 in Table D. As expected, the trend continues from Fig 5, where higher counts are more challenging. We show the generation examples in Figure B (a,b,c,d) in the attached PDF. We will add the results to the paper.
Table D. ‘Count’ accuracy of VPGen.
| Counts | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| VPEval Count Accuracy (%) $\uparrow$ | 85 | 80 | 66 | 55 | 48 | 35 | 28 |
**Q5-2. & Q6. Challenging/complex prompts.** Please see the general response.
**Q7. Inference time.** On a single A6000 GPU, VPGen takes around 6s (2s for Vicuna 13B + 4s for GLIGEN) to generate an image and VPEval takes <1s (typically <0.5s) per evaluation module. We will add this information to the paper.
---
Rebuttal Comment 1.1:
Title: Acknowledging Authors Rebuttal
Comment: I've carefully reviewed the authors' responses and I recognize the clarifications provided on my concerns. Their insight on "W4: Word Ambiguity" is particularly interesting. However, considering the potential for increased prompt complexity, such as indicated in the attached PDF referencing "generating unseen objects a, b", having only two human evaluators might fall short of ensuring comprehensive assessment. The reported inference times are commendable. I concur with the authors' perspective that expanding the dataset or fine-tuning methodologies might bolster the spatial accuracy of generated images (big circle vs. small circle radius). While there might be other potential avenues to enhance this, dataset scaling indeed appears to be a viable solution. On the whole, this paper is great in its approach to addressing a fundamental challenge in a creative way. I enjoyed reading the paper, and the authors' dedication to the subject matter is great.
I am keen to read any updated version to observe the incorporated changes.
---
Reply to Comment 1.1.1:
Comment: Thanks for your supportive and positive discussions. We are glad that you enjoyed reading the paper and appreciated our clarifications. We will add these additional details in the final version. | null | null | null | null | null | null |
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Accept (poster) | Summary: The authors propose a contrastive learning method for graphs. The goal is to learn representations for nodes in an unsupervised way, by maximizing the mutual information between the local and global representations of a graph, training a single encoder to learn node representations. The method uses multiple different filters to learn representations with different emphases on the graph structure (as seen through the spectral representation), and then combines those representations on downstream tasks by learning the mixing coefficients in addition to a linear classifier.
Since the proposed method, FIGURe, is slower than the other methods it is compared to when learning 512-dimensional representations, the authors also explore ways to produce more compact representations, 32 and 128 in width, using RFF to recover some of the lost accuracy on downstream tasks.
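For intuition, the downstream combination step described above can be sketched as follows (a minimal NumPy sketch with my own naming, not the paper's code): frozen per-filter node embeddings are mixed with softmax-normalized learnable coefficients, and a linear classifier is then trained on the mixture.

```python
import numpy as np

def combine_filter_reps(reps, alpha):
    """reps: array of shape (num_filters, num_nodes, dim) holding the
    frozen per-filter node embeddings; alpha: (num_filters,) logits.
    Returns the alpha-weighted mixture fed to the linear classifier."""
    w = np.exp(alpha - alpha.max())
    w = w / w.sum()                       # softmax over filters
    return np.tensordot(w, reps, axes=1)  # (num_nodes, dim)

# toy usage: three filters, five nodes, 4-dim embeddings
reps = np.random.randn(3, 5, 4)
alpha = np.zeros(3)                       # uniform mixing at initialization
combined = combine_filter_reps(reps, alpha)
assert combined.shape == (5, 4)
```

In training, `alpha` and the classifier weights would be optimized jointly on the downstream task while the encoder outputs stay fixed.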
Strengths: * The results show improvement compared to other self-supervised approaches. Although FIGURe does not take first place on every single benchmark, it performs very consistently across datasets, leading to the highest average performance.
* The versions of FIGURe with smaller representations lead to good results as well, underlining the soundness of the authors' approach of reducing the representation dimension and then recovering part of the lost accuracy.
* Overall, the paper is well written, and easy to follow. The preliminary section is concise and yet very useful.
Weaknesses: * The study of the authors is limited to 1-layer GCN. It would be interesting to know how the method compares to others as the GNN architecture gets bigger (in particular deeper).
* The authors justify the need for learning representations of smaller sizes due to the prohibitive cost of contrastive learning, but this part lacks justification in my view: the mean epoch time does indeed increase compared to baselines (+50%), but we are missing details in the paper related to how many epochs it is trained on (in absolute and compared to others methods) - this would help justify the need for the lower dimensionality version and the RFF tricks used afterwards
* The authors present the shared encoder as an efficiency approach, but it seems from Table 4 that it is also increasing performance.
* In general, I would appreciate more details in the experimental section, in particular on the training and evaluation recipes - some of these questions are asked in the "Questions" section of this review - as this would help in understanding whether the comparisons are fair (and help in reproducing results)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Here are a bunch of details that I think need clarification in the experimental sections:
* How long is the method trained compared to the baselines? Does the number of filters influence the training time (linearly?).
* How many filters does the method use compared to the baselines? If it uses more, should we normalize those to make sure the comparisons are fair?
* How are the baselines evaluated? Are they also using the learnable alpha mixing coefficients? Are they using the same number of filters? If not, should we normalize those to make sure the evaluations are fair?
* What happens if we use RFF on the 512-dimensional representations? In particular, on the baselines?
* What happens if we do not use RFF on the 32- and 128-dimensional representations?
In several sections, the authors talk about the cost of having a high number of dimensions and the cost of having several encoders instead of one shared encoder. Would it be possible to put numbers on these costs in terms of time / storage? It would help convey the need for the dimensionality reduction and RFF tricks.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First and foremost, we would like to express our gratitude for the time and effort you dedicated to reviewing our paper, providing us with constructive feedback. We're glad to learn that you found our paper well-written, with a concise and useful preliminary section. We highly appreciate your positive remarks about FIGURe's consistent performance across datasets and its demonstrated soundness in our approach, especially with reduced dimension representations. The acknowledgment of FIGURe's improvement over other self-supervised approaches further underscores the value of our contribution.
1. **Multi Layer Encoder:** Refer to global response.
2. **Time and Storage Cost**: For all the baselines, we used the same maximum number of epochs and the same early stopping criteria. The maximum number of epochs was set to 30,000 and the early stop patience was set to 20 epochs, except for the large datasets, for which the patience was set to 500 epochs. Because each baseline was tuned over multiple hyperparameters, each having a different convergence criterion, it would be difficult to normalize the epochs. However, we made sure to set the epochs to a large value such that the models generally converged to a solution. We build upon DGI. The training time of our model scales linearly with the number of filters. Since our model currently uses three filters for training, our training time is three times that of DGI. We provide the number of epochs, mean epoch time, and total training time for ogbn-arxiv. We also report the storage cost of the representations from these methods. Please note that the linear time relationship between DGI and FiGURe is not visible here due to the inclusion of batching and sampling time.
| Model / Dims | Num Epochs | Mean Epoch Time | Total Time | Storage |
|---|:---:|:---:|:---:|:---:|
| DGI / 512 | 3945 | 0.77s | 50.75 mins | 330.75 MB |
| FiGURe / 512 | 4801 | 0.92s | 73.62 mins | 1.32 GB |
| FiGURe / 128 | 4180 | 0.74s | 51.55 mins | 330.75 MB |
| FiGURe / 32 | 3863 | 0.72s | 46.36 mins | 82.69 MB |
We see both the training time and storage saved by utilizing lower dimensions along with RFF.
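For reference, the RFF projection used to lift low-dimensional embeddings back to a larger width can be sketched as below, assuming the standard Rahimi and Recht random Fourier feature construction for an RBF kernel (the paper's exact kernel and hyperparameters may differ):

```python
import numpy as np

def rff_project(X, out_dim, gamma=1.0, seed=0):
    """Map n x d embeddings to n x out_dim random Fourier features
    approximating an RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # random projection directions and phases of the feature map
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, out_dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=out_dim)
    return np.sqrt(2.0 / out_dim) * np.cos(X @ W + b)

# e.g. lift 32-dim node embeddings to 512 features for the linear probe
Z = rff_project(np.random.randn(100, 32), out_dim=512)
```

The projection itself is training-free, which is why storing the compact 32- or 128-dimensional embeddings and expanding them at evaluation time saves both training time and storage.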
3. **Shared Encoder Improvements:** We hypothesize that weight sharing among encoders results in embedding different filter representations in a shared subspace, thereby enhancing their suitability for learning a combined representation. This ultimately leads to improved features for downstream tasks and, in some cases, results in performance improvements. We will add this point in the final version of the paper.
4. **Normalize Comparisons with Baselines:**
1. Normalized number of parameters: One way to normalize the models is to ensure that they have roughly the same number of parameters. The FiGURe model with a shared encoder uses exactly the same number of parameters as DGI and GRACE. MVGRL and SUGRL use two encoder models and hence have roughly twice the number of parameters. Thus, the FiGURe model uses an equal or smaller number of parameters.
2. Normalized filter inputs: Another way to normalize the inputs is to provide the same filter augmentations to all the baselines. However, there are non-trivial challenges in incorporating filter banks into the other baselines. Hence, we chose to incorporate them into DGI. We do believe that it should be possible to incorporate them into other baseline models as well; this is left for future research.
5. **RFF on 512 dimensions**: We report the result of using RFF on 512 dimensions. We notice that there is not much improvement in performance when the embeddings are already of a sufficiently large dimension.
| Cora | Citeseer | Squirrel | Chameleon |
|:---:|:---:|:---:|:---:|
| 86.84 (0.98) | 74.40 (1.30) | 51.86 (1.87) | 68.60 (1.57) |
6. **With and without RFF**:
In Table 3, we have shown how our method performs with 32 dimensions, with and without RFF. One can observe that our method is better than or comparable to most other methods at 32 dimensions across datasets. This trend continues even after projecting the learnt embeddings via RFF to larger dimensions. Additionally, we report the same table with 128 dimensions below. Similar observations can be made here as well.
| | RFF | cora | citeseer | squirrel | chameleon |
|---------|------------|--------------|--------------|--------------|--------------|
| DGI | $\times$ | 84.99 (1.36) | 72.22 (2.50) | 34.22 (1.47) | 49.82 (2.96) |
| | $\checkmark$ | 84.17 (2.11) | 72.65 (1.52) | 37.97 (1.41) | 57.72 (2.03) |
| Hassani | $\times$ | 85.31 (1.66) | 73.42 (1.63) | 36.92 (1.04) | 55.20 (1.70) |
| | $\checkmark$ | 84.61 (1.74) | 72.81 (2.13) | 38.73 (1.22) | 57.81 (1.80) |
| SUGRL | $\times$ | 71.49 (1.15) | 63.85 (2.27) | 38.04 (1.17) | 53.03 (1.73) |
| | $\checkmark$ | 71.40 (1.40) | 63.06 (2.22) | 43.24 (1.63) | 57.04 (1.78) |
| GRACE | $\times$ | 80.87 (1.49) | 62.52 (3.57) | 41.25 (1.32) | 63.14 (1.89) |
| | $\checkmark$ | 79.70 (1.91) | 64.47 (2.12) | 52.29 (1.81) | 68.90 (2.05) |
| FiGURe| $\times$ | 84.73 (1.13) | 73.07 (1.13) | 41.06 (1.51) | 59.08 (3.36) |
| | $\checkmark$ | 86.14 (1.13) | 73.34 (1.91) | 48.78 (2.48) | 66.03 (2.19) | | Summary: This work proposes a model and contrastive learning method for acquiring a comprehensive spectrum of graph representations by employing filters of various levels. They are adaptively aggregated with learnable weights for downstream supervision tasks. As a result, this approach performs well on both homophilic and heterophilic graphs. Additionally, by utilizing a random feature map for kernel approximation, it demonstrates effectiveness with low computational cost. They employ a shared encoder for multiple filters, which not only reduces computational load compared to using independent filters for each, but also improves performance in some cases. Experimental results exhibit performance improvement on average.
Strengths: They conducted experiments on data with multiple characteristics, demonstrating that their approach works for both homophilic and heterophilic graphs. They proposed various methods for computational efficiency and provided evidence of their effectiveness through experiments. The contribution of their work is clearly described in the abstract and introduction sections.
Weaknesses: This paper requires some improvements in terms of presentation: it contains inconsistent notation (such as the size of the tilde and hat on lines 147 and 149), deferred explanations for newly introduced symbols that are difficult to follow, and numerous mathematical errors. The presence of these mathematical errors prompted a desire to check the implementation, which unfortunately was not provided by the authors. If these issues are addressed in the rebuttal and the necessary improvements are made, I would consider increasing my score. It is necessary to clarify the notation and provide better explanations so that readers can follow the content more easily.
For example, in Section 5.1, the notation $D_w$ is introduced for the first time on line 149, while the explanation for it appears much later, specifically on line 166.
Additionally, there seem to be errors in the equations. In Equation 1, the correct formulation of the MI estimator should be:
$E_{P_i}[-sp(-T)] - E_{P_i \times \tilde{P}_i}[sp(T)]$ (details are omitted here; just note the '$-$' sign).
(7th slide in [this lecture](http://people.ee.duke.edu/~lcarin/Jiachang3.20.2020.pdf))
This formulation intuitively maximizes MI, as the first term increases the similarity between local and global features obtained from the same graph, while the second term decreases the similarity between local and global features obtained from different graphs. Similar errors can be observed in Equation 3, where minimizing the given objective $\mathcal{L}_{F_i}$ seems to result in learning that decreases the similarity between local and global representations from the same graph, particularly in the first term. Moreover, in the second term of Equation 3, only the corrupted one $(\tilde{X}_i, \tilde{F}_i)$ is considered for the expectation and $h_g^{F_i}$ is not covered, which leads to an ambiguous representation. Furthermore, in Section 6, $\sigma$ on lines 227, 233, and in Equation (6) refer to different functions, causing confusion.
Overall, it is crucial to address these issues related to notation, mathematical accuracy, and clarity throughout the paper to enhance its quality and ensure a better understanding for readers.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: In Section 6.5, the authors conducted ablation studies for filters, and it was observed that GPRGNN performed better in all cases. However, it would be informative to explore if there are any other options that demonstrate superior performance. This could involve examining alternative filter configurations or considering different methods altogether. Providing such insights would enhance the comprehensiveness of the analysis and strengthen the paper's contribution. Furthermore, analyzing the learned weight $\alpha$ or TSNE plots of learned representations with different filters would also be interesting.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for dedicating time to thoroughly review our paper and for providing comprehensive feedback on our work. We appreciate your recognition of the strengths of our research. It is encouraging to note that you found value in our experiments across data with multiple characteristics, demonstrating the versatility of our approach in handling both homophilic and heterophilic graphs. We are also glad that you acknowledged the clarity of our contributions in the abstract and introduction sections, and our methods proposed for computational efficiency. We genuinely acknowledge the concerns you have raised regarding the presentation, inconsistencies in notation, and mathematical errors.
1. **Writing improvements:** We apologize for the notational inconsistencies. We thank the reviewer for bringing this to light. We have made the requested changes so that the paper is consistent in its notations. Specifically:
1. The size of the tilde has been changed to be consistent with the hats on lines 147 and 149.
2. The notation $D_{w}$ is now introduced before its first use on line 149.
3. The MI estimator in Equation 1 has now been corrected: $\mathcal{I}^{\textrm{JSD}}\_{i, \theta, \omega}(\cdot, \cdot) := \mathbb{E}\_{\mathbb{P}\_{i}}[-\text{sp}(-T\_{\theta, \omega}(\cdot, \cdot))] - \mathbb{E}\_{\mathbb{P}\_{i} \times \widetilde{\mathbb{P}}\_{i}}[\text{sp}(T\_{\theta, \omega}(\cdot, \cdot))]$
4. Equation 3 has been fixed: $\mathcal{L}\_{F\_i} = -\frac{1}{N + M}\left( \sum_{j=1}^{N}\mathbb{E}[\text{log}(D_{\omega}(h_{j}^{F\_{i}}, h_{g}^{F\_{i}}))] + \sum\_{j=1}^{M}\mathbb{E}[\text{log}(1 - D\_{\omega}(\widetilde{h\_{j}^{F\_{i}}}, h\_{g}^{F_{i}}))] \right)$
5. We have mitigated the confusion around the sigma variable by using $\psi$ to represent PReLU and $\sigma$ to represent the sigmoid.
6. Additionally, we rewrote some parts to reduce confusion.
2. **Code Access:** We would like to point out that the anonymized link to the code is provided in the supplementary material, on line 471 in Section 8.1.
3. **Additional Analysis**: We appreciate your suggestion to broaden our exploration and consider other options as well. In addition to the BernNet filters reported in Section 6.5 of the paper, we report the performance of the Chebyshev filters used in ChebNet [A]. We report two configurations, 3 filters and 11 filters, similar to the analysis done with the BernNet filters. As can be observed from the table, the GPRGNN filters outperform these too. It has been shown in previous works [B] that heterophilic datasets like Chameleon and Squirrel require filters that focus on the tail ends of the spectrum, leading to a dumbbell/parabolic frequency response function. In the case of the GPRGNN filters, the model is able to learn this response function easily, by giving a high weightage to the $A^{2}$ filter. Similarly, homophilic datasets require low-pass filters, which the GPRGNN filters can easily produce by giving higher weightage to the $A$ and $A^3$ filters. However, the coefficients required to create these shapes with other filter banks such as BernNet and ChebNet, although attainable, are not so simple, and the model has a hard time learning them. We believe that this is why the GPRGNN filters give better performance on the datasets that we have considered. However, we would like to point out that there may be datasets where other frequency response shapes are required, and in these cases it is possible that BernNet and ChebNet filters are better suited to the problem. \\
Additionally, we report the alpha coefficients of the best performing split using GPRGNN filters on Cora, Citeseer, Squirrel and Chameleon (the coefficients will be slightly different for different splits of the same dataset). The observations from this table support our hypothesis about the coefficients of the GPRGNN filters. Specifically, it can be observed that the homophilic datasets Cora and Citeseer are $A^{3}$ dominant, leading to the creation of a low-pass filter. In contrast, for the heterophilic datasets Chameleon and Squirrel, the $A^{2}$ filter is dominant, creating a parabolic filter response function. We will add TSNE plots for different filters and filter-banks to the final version of the paper.
* [A]: Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited. NeurIPS 2022.
* [B]: A Piece-wise Polynomial Filtering Approach for Graph Neural Networks, ECML PKDD 2022
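To make the sign conventions of the corrected Equations 1 and 3 concrete, here is a minimal NumPy sketch of the two objectives; the bilinear discriminator $D_{\omega}(h, h_g) = \sigma(h^{\top} W h_g)$, the function names, and all shapes are illustrative assumptions in the style of DGI, not our exact implementation.

```python
import numpy as np

def softplus(x):
    # Numerically stable sp(x) = log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jsd_mi_estimate(pos_scores, neg_scores):
    """Corrected Eq. 1: E_P[-sp(-T)] - E_{P x P~}[sp(T)], maximized during training."""
    return np.mean(-softplus(-pos_scores)) - np.mean(softplus(neg_scores))

def filter_bce_loss(h_pos, h_neg, h_g, W):
    """Corrected Eq. 3 for one filter F_i: BCE over positive (clean) and
    negative (corrupted) patch-summary pairs, minimized during training.

    h_pos: (N, d) patch representations h_j^{F_i} from the clean graph
    h_neg: (M, d) patch representations h~_j^{F_i} from the corrupted graph
    h_g:   (d,)   global summary h_g^{F_i}
    W:     (d, d) bilinear discriminator weights
    """
    pos = sigmoid(h_pos @ W @ h_g)   # D_w(h_j^{F_i}, h_g^{F_i}) -> want close to 1
    neg = sigmoid(h_neg @ W @ h_g)   # D_w(h~_j^{F_i}, h_g^{F_i}) -> want close to 0
    return -(np.log(pos).sum() + np.log1p(-neg).sum()) / (len(pos) + len(neg))
```

When the discriminator separates positive from negative pairs, the BCE loss approaches 0 and the JSD estimate approaches its maximum; at chance level (all scores 0) the estimate is $-2\log 2$.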
---
Rebuttal Comment 1.1:
Title: Additional Analysis (Tables)
Comment: Please find below, the Tables from the "Additional Analysis" section of the rebuttal:
**Alternative Filter Banks Analysis**
| | Cora | Citeseer | Squirrel | Chameleon |
|--------------------------|--------------|--------------|--------------|---------------|
| $\mathbf{F}_{BernNet}^3$ | 85.13 (1.26) | 73.38 (1.81) | 37.07 (1.29) | 53.95 (2.78) |
| $\mathbf{F}_{BernNet}^{11}$ | 86.62 (1.59) | 73.97 (1.43) | 43.48 (3.80) | 62.13 (3.66) |
| $\mathbf{F}_{ChebNet}^3$ | 83.84 (1.36) | 71.92 (2.29) | 40.23 (1.58) | 60.61 (2.03) |
| $\mathbf{F}_{ChebNet}^{11}$ | 76.14 (6.80) | 59.89 (8.94) | 52.46 (1.10) | 67.37 (1.60) |
| $\mathbf{F}_{GPRGNN}$ | 87.00 (1.24) | 74.77 (2.00) | 52.23 (1.19) | 68.55 (1.87) |
**Alpha Coefficient Analysis**
| | I | $A$ | $A^{2}$ | $A^{3}$ |
|-----------|------|---|------|-------|
| Cora | 18.2 | 0 | 0 | 35.95 |
| Citeseer | 0 | 0 | 0 | 0.48 |
| Squirrel | 0 | 0 | 15.3 | 0 |
| Chameleon | 0 | 0 | 8.93 | 0.1 | | Summary: The paper proposes a contrastive learning model for learning node embeddings on a graph, with two technical innovations: First, the authors propose a new augmentation scheme during contrastive learning. Secondly, the authors re-map high dimensional embeddings into lower dimensional space using random Fourier features. The authors claim improvements attributed to both technical innovations on a variety of datasets.
Strengths: Presentation of the paper is mostly clear, figures are well readable, and results are clearly presented.
Weaknesses: - Although the word "augmentation" is very present in both the paper title and abstract (6 occurrences), it only appears once in the paper. It is not fully clear how the abstract claims map onto the technique in the paper, beyond the preliminary info in Section 4.1. The terms should be used more consistently here. It is somewhat inferable from 4.1 what the "augmentations" are, but this needs to be clarified in Section 5.
- Presentation in section 5 is a bit unclear to me. The algorithm proposes two innovations (filter augmentations and random Fourier features), but the experiment section is not grouped accordingly. I would propose to split the content in 5.1. and 5.2. differently to aid the understanding of the different algorithmic components and their implementation.
- l. 183 claims "Maximizing JS is equiv. to reducing BCE" --- can you add a suitable reference for this claim?
- **Baselines**: I checked a few of the reported baselines numbers. For instance, [SUGRL (Table 1)](https://ojs.aaai.org/index.php/AAAI/article/view/20748) reports $83.5 \pm .5$ on Cora vs. $81.21 \pm 2.07$ here, $73.0 \pm 0.4$ on CiteSeer vs. $67.5 \pm 1.62$ here. Also some other baseline numbers seem to be off. Could you comment on possible discrepancies, and whether the code you are running for your experiment can reproduce the numbers reported in the literature? Some of the differences I mentioned above are similar in scale to the ones you use to claim the effectiveness of your method.
- From Table 3 (which I find convincing), it seems like RFF substantially improves many of the baseline methods. It is not really clear to me whether the reported gains are mostly due to the proposed augmentation, or mainly due to the RFF. A clean ablation experiment, ideally on all the datasets, would help. Specifically, is the method SOTA also *without* RFF, i.e., due to the different representation learning approach? This is not addressed by the experiments currently.
- In general, the experimental results seem limited. See my clarification questions below.
**Minor:**
- The choice of colors in Table 1 is probably not well readable by readers with colorblindness.
- The word "significant" is used multiple times in the text, but no statistical tests were run. I propose to either add statistics to your tables (my recommendation), or drop/replace the term.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - l. 178: What is the corruption function?
- Eq. 3: I think the equation is off and misses two minus signs, one in front of the loss term and one in front of the negative examples. I.e., shouldn't the general form be $ - \left(\sum \mathbb E \log D_\omega(.) + \sum \mathbb E \log (1 - D_\omega(.)) \right)$ ?
- What is the rationale for only running experiments on all datasets in Table 1, vs. for all the tables? Also, could you outline the choice of these particular datasets for your study? For instance, SUGRL also evaluates on Ogbn-arxiv, Ogbn-mag, Ogbn-products.
- Could you more clearly outline your contribution w.r.t. the representation learning part? The loss formulation as such seems to be established, still a substantial amount of space in section 5 is distributed to discussing the loss and its derivation. My impression is that mainly the composition and augmentation of the positive pair distinguishes this part from previous work. Could you confirm/discuss this further?
- Does the FIGURe model in Table 1 use RFF, or not? If it does not use RFF as suggested by the caption, then the result in Table 1 would contradict the result in Table 3. Could you clarify?
- Are there any assumptions underlying your proposed augmentation scheme with respect to the properties of the datasets? I.e., are the classification tasks in which a representation obtained using FIGURe can be expected to perform worse than other techniques due to the augmentation design?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for taking the time to review our paper. We truly appreciate your insightful feedback. We are glad to know that you found the presentation of our paper mostly clear, with readable figures and clearly presented results.
# Comment Response
1. **Explaining Augmentation:** We acknowledge the inconsistency in the usage of the term "augmentation" and will strive to enhance clarity by using the term consistently and providing a more detailed explanation in Section 5.
2. **Numerical Differences:** Refer to global response.
2. **Presentation/RFF Gains/Filter Gains:** In Table 2, FiGURe refers to the model with 512-dimensional embeddings without RFF, whereas FiGURe$_{32}$ and FiGURe$_{128}$ refer to models with lower-dimensional embeddings that utilize RFF. No single baseline does consistently well across all heterophilic and homophilic datasets. Comparing FiGURe with other methods shows that using only filter augmentation is consistently either competitive with or better than existing methods. Note that all the baseline methods utilize the same number of dimensions as FiGURe. FiGURe$_{32}$ and FiGURe$_{128}$ show the value of RFF in improving performance and remaining competitive even when learning lower-dimensional embeddings. So the way to read the gains from the individual innovations is: 1] filter augmentation provides consistent performance across all datasets that is comparable to or better than all baselines, and 2] RFF allows for efficient training via lower dimensions while providing gains similar to training with large-dimensional embeddings. We will add more clarification and rearrange the tables to get the message across better.
3. **Max JS implies Min BCE Reference:** We realized that this statement is imprecise. Maximizing the JS MI estimator is a hard problem, but it can be approximately optimized as a BCE loss following the works in [A, B, C].
4. **Choice of Colors:** Thank you for making us aware of this. We will fix this in the final version of the paper.
5. **Clarity of Corruption Function:** Refer to global response.
6. **Equation Discrepancy:** Refer to global response.
7. **Choice of Dataset:** SUGRL primarily focused on homophilic datasets, hence we selected a few datasets with varying levels of homophily [D] to demonstrate the capability of our method. Our ablation studies showed results on two homophilic and two heterophilic datasets, demonstrating some of the properties we want to showcase with regard to our method. We also show results on ArXiv-Year and OGBN-ArXiv to show that we can scale up to 150k+ nodes; please refer to Table 8 in the supplementary material. We plan to conduct additional experiments on ogbn-mag/products, which have 1M+ nodes.
8. **Contribution Summary:** One of the main contributions of this work is the notion of filter augmentations. Prior works have incorporated various filter banks in the supervised setting, where they jointly learn the filter-specific representations and the combination coefficients. The novel insight that we are proposing in this paper is that we can learn these two parts separately and **bridge the gap with supervised methods**. This enabled us to propose a **simple extension to DGI** which easily incorporates these filter banks. However, augmenting more filters can increase the training cost. To save on compute, we also show that reducing the size of the learnt embeddings and then projecting them using RFF allows one to achieve similar performance with lower-dimensional embeddings.
9. **Assumptions and threats to model:** Our model is based on previous filter-based supervised models that have been proposed in the literature. It has been shown in previous works [E] that heterophilic datasets like Chameleon and Squirrel require filters that focus on the tail ends of the spectrum, leading to a dumbbell/parabolic frequency response function. In the case of the GPRGNN filters, the model is able to learn this response function easily, by giving a high weightage to the $A^{2}$ filter. Similarly, homophilic datasets require low-pass filters, which the GPRGNN filters can easily produce by giving higher weightage to the $A$ and/or $A^3$ filters. We show some of the learnt coefficients in the table below for the two homophilic and heterophilic datasets.
| | I | $A$ | $A^{2}$ | $A^{3}$ |
|-----------|------|---|------|-------|
| Cora | 18.2 | 0 | 0 | 35.95 |
| Citeseer | 0 | 0 | 0 | 0.48 |
| Squirrel | 0 | 0 | 15.3 | 0 |
| Chameleon | 0 | 0 | 8.93 | 0.1 |
However, the coefficients required to create these shapes with other filter banks such as BernNet and ChebNet, although attainable, are not so simple, and the model has a hard time learning them. We believe that this is why the GPRGNN filters give better performance on the datasets we have considered. However, we would like to point out that there may be datasets where other frequency response shapes are required, and in these cases it is possible that other filter banks like BernNet/ChebNet are better suited to the problem.
Additionally, we would like to point out that DGI allowed us to easily extend and incorporate filter banks, while other methods like MVGRL, GRACE and SUGRL, provide some non-trivial challenges to incorporate them. However, we do believe that it would be possible, and would like to include this as a part of future research.
* [A] Learning deep representations by mutual information estimation and maximization, ICLR 2019
* [B] Learning Independent Features with Adversarial Nets for Non-linear ICA, ICML 2017 Workshop on Implicit Models
* [C] f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, NeurIPS 2016
* [D] Geom-GCN: Geometric Graph Convolutional Networks, ICLR 2020
* [E] A Piece-wise Polynomial Filtering Approach for Graph Neural Networks, ECML PKDD 2022
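To make the role of the learned $\alpha$ coefficients concrete, the following NumPy sketch builds a GPRGNN-style bank $[I, A, A^{2}, A^{3}]$ from a normalized adjacency and combines per-filter outputs with coefficients like those in the table above. The helper names and the plain linear propagation are illustrative assumptions, not the exact FiGURe architecture.

```python
import numpy as np

def sym_norm_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A + I)D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def filter_bank(A_hat, K=3):
    """GPRGNN-style bank of adjacency powers [I, A_hat, ..., A_hat^K]."""
    bank = [np.eye(A_hat.shape[0])]
    for _ in range(K):
        bank.append(bank[-1] @ A_hat)
    return bank

def combine(bank, X, alphas):
    """Weighted combination of per-filter propagated node features."""
    return sum(a * (F @ X) for a, F in zip(alphas, bank))
```

With the Cora-like coefficients $[18.2, 0, 0, 35.95]$, only the identity and $A^{3}$ terms contribute, yielding a low-pass response; a dominant $A^{2}$ weight (as for Squirrel and Chameleon) yields the parabolic response discussed above.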
---
Rebuttal Comment 1.1:
Title: Re: Baselines
Comment: Thank you for the response. I'll reply to the other points in a separate comment, but wanted to get back regarding the baselines you mentioned in your global comment.
I understand the reason for choosing the protocol from [A, B, C], but still have trouble tracing the respective numbers. If possible, I would appreciate a table where you cite numbers from the respective papers alongside your reproduced numbers, for an easy overview of how the literature vs. reproduced numbers add up.
For instance, this would be great to have for all SOTA methods in Table 1:
| | SQUIRREL | CHAMELEON | ROMAN-EMPIRE | MINESWEEPER | CORA | CITESEER | PUBMED |
| - | - | -| - | -| - | -| - |
|DeepWalk ([...], Table X) | | | | | | |
|DeepWalk (reproduced) | | | | | | |
|... | | | | | | |
|... | | | | | | |
where "reproduced" would be the numbers already in Table 1 in the submission --- this would help me to quickly trace back the sources. (I realize that in some settings there might be some mismatch). Thanks for considering.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response.
Please find the table below. As mentioned above (in the global response), we reuse the splits utilized in [A,B,C] which are different from the splits used in the original papers of the baselines we compare against [D,E,F,G]. However, as requested we are adding the numbers from [D,E,F,G], but please note they are not strictly comparable.
Note that in the leftmost column, the naming convention is as follows - Method Name (Reference, Table-X) (Tab-X denotes Table-X). For example: DGI (E, Tab-2) denotes Method DGI, Reference E, Table-2. The references are given at the bottom. We would like to clarify that for all the baseline methods, we ran the code provided by the authors of those works along with extensive hyperparameter tuning. Specifically, we separate the hyperparameter tuning of the unsupervised part and supervised part, and do not use any information from the supervised part while tuning the hyperparameters of the unsupervised part.
| | Squirrel | Chameleon | Roman-Empire | Minesweeper | ArXiv-Year | Cora | Citeseer | Pubmed | OGBN-ArXiv |
|------------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| | | | | | | | | | |
| Deepwalk (D, Tab-2) | | | | | | 67.2 | 43.2 | 65.3 | |
| Deepwalk (E, Tab-2) | | | | | | 70.7 ± 0.6 | 51.4 ± 0.5 | 74.3 ± 0.9 | |
| Deepwalk (F, Tab-1) | | | | | | 75.7 | 50.5 | 80.5 | |
| Deepwalk (G, Tab-1,2) | | | | | | 67.2 ± 0.2 | 43.2 ± 0.4 | 65.3 ± 0.5 | 63.6 ± 0.4 |
| Deepwalk (Reproduced, Tab-1) | 38.66 (1.44) | 53.42 (1.73) | 13.08 (0.59) | 79.96 (0.08) | | 83.64 (1.85) | 63.66 (3.36) | 80.85 (0.44) | |
| | | | | | | | | | |
| Node2Vec (F, Tab-1) | | | | | | 74.8 | 52.3 | 80.3 | |
| Node2Vec (Reproduced, Tab-1) | 42.60 (1.15) | 54.23 (2.30) | 12.12 (0.30) | 80.00 (0.00) | | 78.19 (1.14) | 57.45 (6.44) | 73.24 (0.59) | |
| | | | | | | | | | |
| DGI (D, Tab-2) | | | | | | 82.3 ± 0.6 | 71.8 ± 0.7 | 76.8 ± 0.6 | |
| DGI (E, Tab-2) | | | | | | 82.3 ± 0.6 | 71.8 ± 0.7 | 76.8 ± 0.6 | |
| DGI (F, Tab-1) | | | | | | 82.6 ± 0.4 | 68.8 ± 0.7 | 86.0 ± 0.1 | |
| DGI (G, Tab-1, 2) | | | | | | 82.3 ± 0.5 | 71.5 ± 0.4 | 79.4 ± 0.3 | 65.1 ± 0.4 |
| DGI (Reproduced, Tab-1) | 39.61 (1.81) | 59.28 (1.23) | 47.54 (0.76) | 82.51 (0.47) | 40.59 (0.09) | 84.57 (1.22) | 73.96 (1.61) | 86.57 (0.52) | 65.58 (0.00) |
| | | | | | | | | | |
| MVGRL (E, Tab-2) | | | | | | 86.8 ± 0.5 | 73.3 ± 0.5 | 80.1 ± 0.7 | |
| MVGRL (G, Tab-1,2) | | | | | | 82.9 ± 0.3 | 72.6 ± 0.4 | 80.1 ± 0.7 | 68.7 ± 0.4 |
| MVGRL (Reproduced, Tab-1) | 39.90 (1.39) | 54.61 (2.29) | 68.50 (0.38) | 85.60 (0.35) | OOM | 86.22 (1.30) | 75.02 (1.72) | 87.12 (0.35) | OOM |
| | | | | | | | | | |
Continued below. | Summary: The paper proposes a few approaches to improve the contrastive learning framework of the unsupervised graph representation learning (UGRL) problem. Building on top of the prior works in supervised GRL and UGRL areas (e.g., filter bank construction, etc.), the authors argued that 1) filter-based augmentations (essentially treating filter banks as "additional views" in contrastive learning schemes) is able to provide useful representations across cases that require high-freq and low-freq components; and 2) we can leverage a lot of techniques long known in the ML/DL community like RFF to efficiently reduce the computational complexity of these latent representations in UGRL.
Strengths: The paper proposed a relatively novel approach of leveraging different filter banks as additional views in a contrastive graph representation learning problem. The method basically builds upon the MI maximization scheme proposed in methods like DGI but extends the approach to also include filter bank $F=\{F_1, \dots, F_k\}$ such that each filter's encoder output's MI with the input data is maximized. The empirical analysis reveals that the approach is superior (and likely compatible with?) to the previous UGRL approaches that didn't use filter-bank-related augmentations; and that RFF provides a reasonable boost to low-dimensionality representation learning in graphs. Specifically:
1. The paper provides a reasonable empirical analysis of the approach on a diverse set of heterophilic and homophilic datasets, demonstrating improvements in both settings.
2. Numerous ablative studies were made, including on low and high dimensionalities, encoder weight sharing, efficiency, and filter bank selection.
3. The idea of including multiple filters in the contrastive learning framework itself makes sense to me and is a novel addition to the UGRL literature.
Weaknesses: A few weaknesses of the paper:
1. The main idea of the approach (maximizing mutual information of encoder output and input; applying discriminator to the patch-readout pair), which is described in Sec. 5.1, is still mainly built up on the discussion of the DGI paper. Although the inclusion of the filter bank discussion is new (which follows from other recent work like BernNet, GPRGNN, and ChebNet), this still limits the novelty of the approach itself.
2. The contribution of RFF projections, as shown in Sec. 6.2, is largely orthogonal to the FiGURe approach itself. I wonder whether it's the best idea to include it in this paper whose title is on FiGURe and filter augmentations. For example, the efficiency analysis in Sec. 6.4 clearly is lacking the comparison to DGI/MVGRL + RFF, and is thus unfair. Moreover, as the authors suggested in the supplementary materials, the RFF projection they used was based on a Gaussian kernel. Whether and how different projections (e.g., Laplacian) make a difference is unknown; the gradient instability of using RFFs in the architecture (which is an important problem in RFFs' usage in Transformers) is not discussed, etc. I personally feel there is a lot of value to the analysis of RFF itself.
3. I'm less certain whether "shared encoder weight" is a contribution (as listed on line 48) of this paper. It seems more like a remedy to the computational burden added by the FiGURe approach, but is not required otherwise?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. What is the effect of the number of encoder layers used in this case? The authors used one (as was in DGI), but there have also been works that used more.
2. Could the authors expand Table 5 to include RFF + {other baseline methods}?
3. Are the numbers provided in Table 1 and 2 the results of the authors' reproduction, or the original papers? Some of these numbers were generally different from what was reported in the original & other papers (e.g., MVGRL, GRACE).
4. Could the authors elaborate more on the corruption function $C$ (especially how it creates $\tilde{F}$)?
---------
UPDATE: I have read the rebuttals from the authors and appreciate the additional experiments & explanations. As the authors acknowledge, the benefit of RFF is not unique to FiGURe (which some other reviewers also pointed out), and I think it should be elaborated on and compared to more clearly in the paper's revision (as should the novelty issue). I'd like to maintain the current rating, but I think the paper's writing can be improved on these axes to avoid confusion.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Firstly, we would like to extend our sincere gratitude for your comprehensive and constructive feedback on our paper. We appreciate your recognition of the strengths in our work. Specifically, we are pleased that you found value in our novel approach of leveraging different filter banks as additional views in the contrastive graph representation learning problem. Your acknowledgment of the benefits of this method, including its empirical analysis across a diverse set of heterophilic and homophilic datasets, and its potential novelty addition to the literature is encouraging.
# Comment Response
1. **Novelty:** We acknowledge that our approach in Section 5.1 is largely built upon the foundational ideas from the DGI paper and that prior works have incorporated various filter banks. However, prior filter bank methods jointly learn the filter-specific representations and the combination coefficients. The novel insight that we are proposing in this paper is that we can learn these two parts separately and **bridge the gap with supervised methods**. This enabled us to **propose a simple extension** to DGI which easily incorporates these filter banks. Extending other methods to use filter banks is non-trivial, and we plan to explore this in future work. We will make these aspects clearer in our paper.
2. **Inclusion of RFF:** Our goal was to ensure that augmenting these additional filter banks does not make the approach impractical by significantly increasing the training time. In Section 6.4, we wished to demonstrate that with RFF we are able to make the training time practical enough. Your feedback on the inclusion of RFF projections and their relation to the FiGURe approach is noted. We recognize the potential for separate deep dives into the contributions of RFF and FiGURe and will consider refining our content accordingly. Also, here is a table with performance numbers for a few different projections.
| Features | Cora | Citeseer | Squirrel | Chameleon |
|-----------------------------|--------------|--------------|--------------|--------------|
| Polynomial (d=2) [A] | 81.35 (2.11) | 69.81 (2.14) | 38.57 (1.56) | 53.88 (1.94) |
| Polynomial (d=10) [A] | 80.44 (1.56) | 68.71 (1.67) | 38.08 (1.10) | 55.20 (1.97) |
| Exp [A] | 80.42 (2.14) | 68.81 (1.34) | 38.06 (1.61) | 55.50 (2.11) |
| ANOVA [B] | 83.26 (0.78) | 70.09 (2.44) | 40.77 (1.46) | 56.01 (1.86) |
| RFF | 87.00 (1.24) | 74.77 (2.00) | 52.23 (1.19) | 68.55 (1.87) |
We will add more details about the training setup and results on all datasets to the paper as suggested. We will also add a discussion of the gradient instability of using RFFs in the architecture.
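For context, the RFF projection referred to above can be sketched as follows; the output dimension, kernel bandwidth, and function names here are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def rff_features(X, out_dim=64, gamma=1.0, seed=0):
    """Map inputs to random Fourier features whose inner products
    approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the spectral density of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, out_dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=out_dim)
    return np.sqrt(2.0 / out_dim) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 16))
Z = rff_features(X)
K_approx = Z @ Z.T  # approximates the RBF kernel Gram matrix of X
```

The key property is that the explicit map is cheap to compute and fixed-dimensional, which is why it keeps training time practical compared to exact kernel computations.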
3. **Effect of Encoder Layers:** Refer to global response.
4. **Numerical Differences:** Refer to global response.
5. **Clarity of Corruption Function:** Refer to global response.
* [A] Random Feature Maps for Dot Product Kernels. AISTATS 2012.
* [B] Random Feature Maps for the Itemset Kernel. AAAI 2019. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their valuable feedback. Reviewer qpib's recognition of our novel approach leveraging filter banks is appreciated. Reviewer Ck44's positive remarks on clear presentation and readable figures are noted. Reviewer wrwF's acknowledgment of our approach's versatility and clarity in contributions is encouraging. Reviewer xu4A's appreciation of our paper's quality, consistent performance of FIGURe, and improvement over other methods highlights the strength of our work.
We have addressed specific concerns that the reviewers have raised as responses to the individual reviews. Here we address some of the common concerns that were raised by two or more reviewers.
1. **Equation discrepancy**: Equation 3 has been fixed: $\mathcal{L}\_{F\_i} = -\frac{1}{N + M}\left( \sum\_{j=1}^{N}\mathbb{E}[\text{log}(D\_{\omega}(h\_{j}^{F\_{i}}, h\_{g}^{F\_{i}}))] + \sum\_{j=1}^{M}\mathbb{E}[\text{log}(1 - D\_{\omega}(\widetilde{h\_{j}^{F\_{i}}}, h\_{g}^{F\_{i}}))] \right)$
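As a rough numerical sketch of this corrected loss (the bilinear discriminator, all shapes, and the replacement of expectations by sampled terms are our illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def filter_loss(H, H_tilde, h_g, W):
    """Contrastive loss for one filter F_i: N real node embeddings H (positives),
    M embeddings H_tilde from corrupted features (negatives), graph summary h_g.
    The discriminator D_w is taken to be a bilinear form followed by a sigmoid."""
    N, M = len(H), len(H_tilde)
    pos = sigmoid(H @ W @ h_g)        # D_w(h_j^{F_i}, h_g^{F_i})
    neg = sigmoid(H_tilde @ W @ h_g)  # D_w(h~_j^{F_i}, h_g^{F_i})
    return -(np.log(pos).sum() + np.log(1.0 - neg).sum()) / (N + M)

rng = np.random.default_rng(0)
d = 8
W = 0.1 * rng.normal(size=(d, d))
h_g = rng.normal(size=d)
loss = filter_loss(rng.normal(size=(4, d)), rng.normal(size=(3, d)), h_g, W)
```

Note that both log terms are negative for probabilities in (0, 1), so the normalized negated sum is always a positive scalar loss.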
2. **Numerical differences:** There are some differences in the numbers reported in the original papers of the baseline methods and our reproduction. The dataset splits used are different from the ones used in the original papers. Specifically, the splits we use have been used before in papers like [A,B,C]. The reason we chose these splits is two-fold. Firstly, there are ten splits provided in these papers (except for the large datasets), as opposed to the single-split versions that the baseline methods are working with. We report all our results as averages over all these 10 splits. Secondly, there are certain inconsistencies in some of the splits being used in the baselines. For example, for Cora, the dataset split used by [D] is different from the one utilised in the original [E] paper, which is the split they use for reporting their baseline performance numbers. For more details on this, issue \#2 on the MVGRL GitHub repository can be referred to. For all the baselines, we ran the code provided by the authors of those works along with extensive hyperparameter tuning. Specifically, we separate the hyperparameter tuning of the unsupervised part and supervised part, and do not use any information from the supervised part while tuning the hyperparameters of the unsupervised part.
3. **Encoder layers:** We include an analysis with more encoder layers. There are two interpretations of 'encoder layers' in this case. First, a deeper GCN, which implies the aggregation of multiple-hop neighborhood information into the node. Second, a single-hop GCN with a deeper network to transform the features. We report the performance of both these cases, with two and three layers each. It can be observed that the single-layer GCN performs on par with or better than all other configurations.
| Accuracies | Cora | Citeseer | Squirrel | Chameleon |
|-------------|------------------|---------------------|---------------------|----------------------|
| 1 Layer GCN | 87.00 (1.24) | 74.77 (2.00) | 52.23 (1.19) | 68.55 (1.87) |
| 2 Layer GCN | 86.62 (1.43) | 73.62 (1.46) | 43.80 (1.57) | 53.53 (2.13) |
| 3 Layer GCN | 84.40 (1.84) | 72.52 (2.09) | 42.79 (1.12) | 61.73 (2.25) |
| GCN + 2 Layer MLP | 85.73 (1.03) | 70.21 (2.30) | 49.91 (2.68) | 68.18 (1.76) |
| GCN + 3 Layer MLP | 84.99 (1.43) | 71.39 (2.32) | 45.85 (3.33) | 64.19 (1.43) |
4. **Clarity of Corruption Function**: The corruption function takes nodes and their associated features ($\mathbf{X}$) as input, along with a graph ($\mathbf{F}_{i}$) representing these nodes. The function maintains the structure of the graph, but it rearranges the rows of $\mathbf{X}$. This rearrangement effectively shuffles the node features, ensuring that each node's features differ from the original input data. When generating embeddings using a Graph Convolutional Network (GCN), the node embeddings are combined, resulting in a corrupted representation of the nodes.
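A minimal sketch of this corruption step (function and variable names are ours, for illustration):

```python
import numpy as np

def corrupt(X, rng):
    """Corruption for contrastive training: permute the rows of the feature
    matrix X, so each node receives another node's features; the graph itself
    (its adjacency structure) is passed through unchanged."""
    return X[rng.permutation(X.shape[0])]

rng = np.random.default_rng(0)
X = np.arange(12, dtype=float).reshape(4, 3)  # 4 nodes, 3 features each
X_tilde = corrupt(X, rng)
# X_tilde holds the same multiset of feature rows as X, assigned to different nodes.
```

Running the GCN encoder on the permuted features with the original graph then yields the corrupted node representations used as negatives.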
* [A] Geom-GCN: Geometric Graph Convolutional Networks, ICLR 2020
* [B] A Piece-wise Polynomial Filtering Approach for Graph Neural Networks, ECML PKDD 2022
* [C] A critical look at the evaluation of GNNs under heterophily: are we really making progress? ICLR 2023
* [D] Contrastive Multi-View Representation Learning on Graphs. ICML 2020
* [E] Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017 | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Uncertainty Quantification over Graph with Conformalized Graph Neural Networks | Accept (spotlight) | Summary: Conformal Prediction (CP) outputs a prediction set that contains the true label with a certain likelihood given assumptions on exchangeability. It is a well-known and popular uncertainty quantification (UQ) technique. The authors propose a technique that unites GNNs and CP called conformalized GNN (CF-GNN). The technique is crucially permutation equivariant and topology aware.
Strengths: The paper tackles an interesting problem of unifying CP and GNNs. UQ is an important area of research and CP offers strong sound guarantees to tackle such a problem. The paper is well-written and I appreciated that the authors keep important considerations such as permutation equivariance, efficiency and topology in mind when developing their technique. The evaluation is extensive and shows good results.
Weaknesses: The technique focuses on the transductive setting only. While this is still a valid and interesting setting, the restriction to it is a weakness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is there a reason for the focus on the transductive setting? Are there any clear challenges in inductive node-level or inductive graph-level tasks for instance. I believe inductive graph-level tasks to be an extremely interesting approach as doing UQ on molecular predictions may be a very important application. Another important application could also be link prediction.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors address the transductive focus as a limitation which is an important consideration. I don't foresee potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback and for acknowledging that our paper tackles an interesting problem, that it is well-written, and that the evaluation is extensive. The reviewer raises great questions and we respond to them below:
> Why focus only on transductive settings? Extension to inductive and graph-level tasks?
We thank the reviewer for this intriguing comment. The transductive setting is a widely useful evaluation setup for graph machine learning [1,2]. In Section 3, we show in theory that graph exchangeability holds for transductive settings, which enable the direct application of conformal prediction to graph-structured data. While other settings such as inductive learning are at least equally exciting, we start with the transductive setting to connect with the most standard conformal inference approach and establish the foundation for conformal prediction on graphs.
There are indeed nontrivial challenges for the inductive node-level setting: it may need a completely different setup, theoretical analysis, and methodology, which goes way beyond the scope of this paper. In the inductive setting, newly arrived testing nodes will be potentially connected to calibration nodes in the graph. Thus the non-conformity scores of calibration nodes are dependent on the ordering of testing nodes. In other words, different ordering of testing nodes that connect to the graph may imply that calibration nodes have different non-conformity scores. This violates the permutation invariance condition we laid out for graph exchangeability. We believe that the extension to the inductive setting needs significantly different techniques (the reviewer has pointed out that this is a very valuable future direction to pursue). A possibility is to extend the “beyond exchangeability” framework [3] by reweighting conformity scores by using certain predefined “similarity scores”. However, even the theory in [3] might not apply since it needs data-independent weights as input, and this precludes the use of network structures.
For the graph-level prediction task such as the prediction of molecule properties, each data point is a graph. Thus, there is no issue with graph exchangeability since there are no direct dependencies between calibration and testing graphs, as seen in the node-level prediction problem. In this problem, conventional conformal prediction methods can be directly applied.
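As an illustration of that direct application, here is a minimal split conformal sketch for exchangeable graph-level data; the simple score $1 - p_y$ and all shapes are our assumptions, not the scores used in the paper:

```python
import numpy as np

def conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Split conformal prediction with the score s(x, y) = 1 - p_y(x):
    calibrate a threshold on held-out examples, then form prediction sets."""
    n = len(y_cal)
    scores = 1.0 - probs_cal[np.arange(n), y_cal]
    # Finite-sample corrected (1 - alpha) quantile of calibration scores.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return [np.flatnonzero(1.0 - p <= q) for p in probs_test]

rng = np.random.default_rng(0)
probs_cal = rng.dirichlet(np.ones(3), size=100)  # stand-in classifier outputs
y_cal = rng.integers(0, 3, size=100)
sets = conformal_sets(probs_cal, y_cal, rng.dirichlet(np.ones(3), size=5))
```

Because calibration and test graphs are exchangeable, each returned set contains the true label with probability at least $1 - \alpha$.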
We will expand the discussion on these points in the updated paper.
[1] Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. ICLR.
[2] Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive representation learning on large graphs. NeurIPS.
[3] Barber, R. F., Candes, E. J., Ramdas, A., & Tibshirani, R. J. (2023). Conformal prediction beyond exchangeability. The Annals of Statistics, 51(2), 816-845.
---
Rebuttal Comment 1.1:
Comment: Thank you for your interesting response and for adding a discussion regarding this point in the paper. I think this is great work and addresses a very valuable problem. I have raised my score accordingly. | Summary: The authors propose Conformalized Graph Neural Networks (CF-GNNs), which extend conformal prediction to graphs for uncertainty quantification. The framework allows a GNN to produce confidence intervals for its predictions, based on an uncertainty estimation on a held-out calibration set. Under a permutation invariance condition, they provide a theoretical guarantee for the coverage of test-time confidence intervals. Furthermore, a learnable correction model is introduced to empirically produce more efficient prediction intervals. Experiments show that CF-GNNs achieve better coverage rates than baselines and meanwhile have more efficient (smaller) confidence intervals than naive conformal prediction.
Strengths: - [originality] the authors extend conformal prediction to the node-level task and give a theoretical coverage guarantee. As far as I know, this adaptation and the theoretical results are new.
- [significance] this work makes solid theoretical and empirical contributions to uncertainty quantifications of graphs. The proposed method has a greater coverage rate and a smaller prediction intervals size.
Weaknesses: See Questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - At Line 276 the authors mention "As smaller coverage always leads to higher efficiency, for a fair comparison, we can only compare methods on efficiency that achieve the same coverage. Thus, we do not evaluate UQ baselines here since they do not produce exact coverage and are thus not comparable". Do you think there is a way to make some of the other UQ methods empirically reach $(1-\alpha)$ coverage so that we can get some sense of their prediction set sizes for comparison?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitation discussed. No potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful feedback and for acknowledging that our paper is original and makes solid theoretical and empirical contributions. The reviewer has raised great questions and we respond to them below:
> Ways to make baseline UQ methods empirically reach $1-\alpha$ coverage?
We thank the reviewer for raising this great question! We want to emphasize that, first, these UQ methods cannot reach nominal coverage either in theory or empirically. In contrast, what is unique about CF-GNN is that it rigorously achieves any pre-specified coverage rate. Second, even if it is possible to tune the nominal level of previous UQ methods to achieve $1-\alpha$ coverage and compare their efficiency with CF-GNN, in practice, it is impossible to know to what extent one should adjust the nominal level for those methods. On a side note, conformal methods could be used to calibrate the nominal level for these UQ methods to achieve valid coverage; however, this is beyond the original implementation of the UQ methods and requires substantial effort in method development.
However, for the sake of evaluating the methods with access to gold labels, it is possible to reach $(1-\alpha)$ coverage by tuning the nominal coverage level of the previous methods and selecting the level when the empirical coverage reaches the target coverage. For our preliminary experiment, we used the Cora dataset with a target coverage of 95% and we studied Graph Attention Temperature Scaling (GATS) [NeurIPS 2022], which represents the most recent GNN-based UQ calibration method in our set of baselines. We conducted a grid search of nominal coverage from 90% to 100% with step size 0.1% and stopped when it reached the empirical target coverage. The stopping nominal coverage is adjusted to $1-\alpha$ = 92.2%, which is far away from the pre-specified target 95% coverage. The prediction set size becomes 1.82 which is still larger (less efficient) than CF-GNN, which has size 1.76. We will discuss this interesting point in the revised manuscript. | Summary: This paper presents a new approach, known as conformalized Graph Neural Networks (CF-GNN), designed to bring reliable uncertainty estimates to graph-structured data prediction models. The study's primary contribution is the innovative adaptation of conformal prediction (CP) to Graph Neural Networks (GNNs). The proposed CF-GNN model is capable of generating prediction sets or intervals that encapsulate the true label, with a level of coverage probability (90%) that can be predefined.
Strengths: The idea of extending conformal prediction to GNNs is quite interesting and timely. The authors establish a permutation invariance condition that justifies the application of CP on graph data and gives a precise outline of test-time coverage. The empirical evidence from numerous experiments validates the effectiveness of CF-GNN, demonstrating that it can meet any specified target marginal coverage while drastically reducing prediction set or interval size by up to 74% when compared to baseline models. The overall presentation is easy-to-follow, and the technical contribution is non-trivial.
Weaknesses: I don't see major flaws in this manuscript, but the introduction to how to construct correlation datasets can be more clear. In addition, the manuscript seems to miss a portion of works [1, 2, 3] on quantifying the uncertainty in GNN predictions. If the proposal of conformal prediction is to quantifying the uncertainty of GNN prediction, then other approaches on uncertainty quantification of node classification with GNNs should not be neglected.
[1] Stadler, Maximilian, et al. "Graph posterior network: Bayesian predictive uncertainty for node classification." Advances in Neural Information Processing Systems 34 (2021): 18033-18048.
[2] Zhao, Xujiang, et al. "Uncertainty aware semi-supervised learning on graph data." Advances in Neural Information Processing Systems 33 (2020): 12827-12836.
[3] Gao, Jiayi, et al. "Topology Uncertainty Modeling For Imbalanced Node Classification on Graphs." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to the Weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I am not seeing limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful feedback and for noting that our paper is interesting, timely, and easy to follow. The reviewer raises great questions and we respond to them below:
> Clarification on the construction of the correction dataset
We thank the reviewer for raising this issue, which is helpful in improving the clarity of our manuscript. We shall clarify in the revision. The correction dataset was randomly sampled from the calibration dataset with a pre-specified sample size. It plays a crucial role in our approach, enabling the calculation of inefficiency loss by simulating the downstream conformal steps.
> Inclusion of GNN-based uncertainty quantification methods.
We thank the reviewer for pointing us to these papers. We will discuss them in the expanded section on related works. Note that the methods in question are not directly comparable to our work since they produce an uncertainty score representing the model uncertainty per node, which aims at OOD/misclassification detection. As a result, these works do not construct uncertainty scores for all classes and do not produce prediction intervals/sets. In contrast, we aim to construct a prediction set/interval and require a score for every class. We also want to mention that conformal prediction could also be extended to these problems in non-graph settings [1,2] and we happen to be interested in extending it to graph OOD detections. However, this direction is out of the scope of this current work.
[1] Kaur, Ramneet, et al. "iDECODe: In-distribution equivariance for conformal out-of-distribution detection." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 7. 2022.
[2] Ishimtsev, Vladislav, et al. "Conformal $ k $-NN Anomaly Detector for Univariate Data Streams." Conformal and Probabilistic Prediction and Applications. PMLR, 2017. | Summary: This paper proposes a conformal prediction method tailored for graph-structured data. The proposed correction method is topology-aware and based on an empirical observation that inefficiencies correlate highly with network edges. The method updates node predictions based on its neighbors, and it is trainable alongside the GNN model. They also show how regular conformal prediction methods work under
Strengths: [1] The well-motivated problem, clean writing, and detailed related works.
[2] The first conformal prediction method for graph-structured data with exchangeability and validity assumptions is GNN agnostic and intuitive.
[3] Rigorous proof and method to show exchangeability and validity of conformal prediction on graph-structured data for the first time in the literature.
[4] Capable of achieving conditional coverage, which is a stronger version of marginal coverage.
[5] In-depth experiments and ablation studies show the efficacy and efficiency of the proposed method.
Weaknesses: [1] Inductive settings for GNN problems are more realistic than transductive settings. It is also not motivated why the authors start with a transductive setting.
[2] There could be ablation studies/experiments over the conformity score functions, such as testing with RAPS[1].
[3] Even though the original coverage definition is used to measure inefficiency, for graph-structured data there is inherent non-IIDness in the data. Therefore, giving each sample equal weight in the coverage calculation is not appropriate. Weighting based on the degree of a node could be a great idea. I would love to discuss this part during the rebuttals.
[4] Why is there no experimental comparison with DAPS[2] (ICML23), which is also a conformal prediction method for node prediction?
[5] The paper's assumption of exchangeability is strict. What happens if exchangeability does not hold?
[1] Angelopoulos, Anastasios, et al. "Uncertainty sets for image classifiers using conformal prediction." arXiv preprint arXiv:2009.14193 (2020).
[2] Zargarbashi, Soroush H., Simone Antonelli, and Aleksandar Bojchevski. "Conformal Prediction Sets for Graph Neural Networks." (2023).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (1) Why do authors not tackle with inductive node classification problem?
(2) What happens if permutation invariance does not hold?
(3) What is the reason for not using RAPS[1]?
(5) In lines 133-134, while employing conformal prediction, is it even possible to change the trained prediction?
(6) My intuitive understanding is that if inefficiency is correlated with the network edges, why degree does not affect the coverage? I feel that coverage for graph data should be redefined for graph-structured data. For example, each sample could be weighted based on the degree of the node.
(7) In what circumstances does exchangeability not hold?
[1] Angelopoulos, Anastasios, et al. "Uncertainty sets for image classifiers using conformal prediction." arXiv preprint arXiv:2009.14193 (2020).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have addressed limitations perfectly. As they also mentioned, this method is valid for transductive settings, but it will not be optimal for inductive settings. Authors also plan to extend it for inductive settings and link prediction tasks. Also, this method is heavily based on the exchangeability assumptions. It is unclear how to modify the algorithm because exchangeability assumptions do not hold.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful feedback and for noting that our paper tackles a well-motivated problem, that our method is rigorous and novel, and that our experiments are in-depth. Below, we address the excellent questions raised, with numbers corresponding to those in the review (e.g., [W1] refers to Weakness 1, [Q1] to Question 1, etc.) and we combine similar issues as needed.
> [W1, Q1] Why focus on the transductive setting? How about the inductive setting?
Thank you for this great comment. The transductive setting is a widely used evaluation setup for graph machine learning [1]. In Section 3, we show in theory that graph exchangeability holds for transductive settings, which enables the application of conformal prediction to graph-structured data. However, in the inductive setting, freshly arrived testing nodes will be potentially connected to calibration nodes in the graph. Thus the non-conformity scores of calibration nodes are dependent on the ordering of testing nodes. In other words, different orderings of testing nodes that connect to the graph yield calibration nodes with different non-conformity scores. This violates the permutation invariance condition we laid out for graph exchangeability. This setting would be far from common conformal prediction approaches. Thus, in order to establish the foundation for graph conformal prediction, we focus on the transductive setting in this work and leave the extension to the inductive setting for future work.
[1] Kipf and Welling. Semi-supervised classification with graph convolutional networks. ICLR 2016.
> [Q2] What happens when the permutation invariance condition does not hold?
Thank you for raising this question. Without any further assumptions, violating the permutation invariance condition may violate exchangeability and make conformal prediction invalid. On the other hand, permutation invariance is a sufficient, but not necessary, condition for exchangeability (which is critical for the validity of conformal prediction). There might exist scenarios where permutation invariance is violated but exchangeability still holds; we believe that investigation on such cases is an exciting direction that requires considerable additional efforts.
> [W5, Q7] When does exchangeability not hold and what will happen?
Thank you for this question. Exchangeability is a fundamental assumption for conformal prediction. In summary, there are two scenarios in the graph setting where exchangeability does not hold. One is the transductive setting with a non-random split. The second is the inductive setting.
If it is violated, then conformal prediction will generally be invalid. One direction is to extend to the “beyond exchangeability” framework [1] by reweighting conformity scores. But adapting this to non-exchangeable graph data remains a challenge since it does not allow for data-dependent weights (e.g. those based on network features). We believe that exploring graph conformal prediction beyond exchangeability is an important (and admittedly challenging) direction to pursue, which we leave for future research.
[1] Barber et al. Conformal prediction beyond exchangeability. The Annals of Statistics 2023.
> [W2, Q3] What is the result for non-conformity score RAPS?
Thanks for this suggestion. CF-GNN is agnostic to the choice of non-conformity score. We have picked CQR and APS since they are representative choices for regression and classification. Following the suggestion, we further experimented on RAPS and showed that CF-GNN can still obtain consistent improvement in efficiency reduction:
|Size|CP|CF-GNN|%Improvement|
|-|-|-|-|
|Cora|1.67±0.11|1.42±0.05|-15.35%|
|DBLP|1.39±0.02|1.32±0.01|-5.00%|
|CiteSeer|1.30±0.07|1.04±0.04|-19.85%|
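For reference, the deterministic variant of the RAPS score can be sketched as follows; the penalty weight and rank threshold below are illustrative choices, not the tuned values from our experiments:

```python
import numpy as np

def raps_score(probs, y, lam=0.1, k_reg=2):
    """RAPS non-conformity score for candidate label y: the APS cumulative
    probability mass down to y, plus a penalty on ranks beyond k_reg."""
    order = np.argsort(-probs)                     # classes by decreasing probability
    rank = int(np.flatnonzero(order == y)[0]) + 1  # 1-indexed rank of y
    cum = probs[order][:rank].sum()                # mass of y and higher-ranked classes
    return cum + lam * max(0, rank - k_reg)

p = np.array([0.6, 0.3, 0.08, 0.02])
s_top = raps_score(p, 0)   # rank 1: mass 0.6, no rank penalty
s_tail = raps_score(p, 3)  # rank 4: full mass 1.0 plus penalty 0.1 * (4 - 2)
```

The rank penalty discourages very large prediction sets, which is why RAPS tends to be more size-efficient than plain APS.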
> [W4] Comparison with DAPS.
Thank you for pointing out this interesting paper. We would like to first emphasize that *DAPS is concurrent work and was made public after we submitted our work to NeurIPS*. This is why we were not able to compare against it in our submission. We also note that DAPS employs diffusion to aggregate non-conformity scores, whereas CF-GNN introduces a learnable framework that optimizes efficiency through a specialized loss function. In fact, CF-GNN can be seen as a strict generalization of DAPS, becoming equivalent when employing a simple sum function over immediate neighbors. Our method's versatility also extends to regression tasks, highlighting further distinctions from DAPS. We will discuss it in the paper.
> [W3, Q6] Redefining degree-weighted coverage?
Thank you for raising this intriguing question. We first note that modifying the coverage definition with degrees would incur substantial modifications to the theory underlying graph conformal prediction. However, even without changing the definition, conditional coverage based on node degree can give insights into what we would get for your "weighted" coverage. The idea is that if the conditional coverage is good across all values of node degree, then degree-weighted coverage is also valid. As shown in Supplementary Table 9, with base CP, the worst-slice conditional coverage (WSC) conditioned on degree decreases to 0.79 when the target is 0.9, supporting the reviewer’s claim that marginal coverage may neglect variation in node degrees. In contrast, our CF-GNN has 0.89 WSC, implying satisfactory degree-based coverage. We hypothesize that it is due to network smoothing, where the prediction of low-degree nodes is modified by connections to high-degree nodes. More theoretical and empirical analysis will be interesting and left for future work.
> [Q5] Tradeoff between uncertainty quantification and predictive performance?
We thank the reviewer for bringing up this important question. Reviewer Wep3 has also expressed interest in this matter. Due to the constraints on the length of our response to individual reviewers, we direct the reviewer to our detailed answer provided in Reviewer Wep3's inquiries [Question 2]. | Rebuttal 1:
Rebuttal: > Summary of main points
We thank the reviewers for their valuable feedback and constructive suggestions for improvement. Overall, all five reviewers considered our work well-written and well-motivated, and all appreciated the theoretical rigor and strong empirical performance of our proposed method.
A few stimulating questions raised have motivated us to conduct additional numerical experiments, which strengthen the empirical evidence and the paper. In summary, we found the following properties through new experiments:
- The training cost of calibration GNN is manageable as it scales well to large graphs using a mini-batching strategy (Wep3);
- Our framework consistently improves the efficiency for other nonconformity scores beyond APS and CQR (M4K7);
- Our framework has minimal impact on model accuracy (Wep3, M4K7);
- Our method is more efficient than other UQ methods when the latter are manually tuned to a specific coverage level (y5h3).
We also added further discussion in response to the reviewers’ important suggestions for clarification, including
- the motivation of the transductive setting (M4K7, rkWb)
- the importance of permutation invariance and exchangeability (M4K7)
- the coverage after weighting based on node degrees (M4K7)
- the robustness to overfitting (Wep3)
- the compatibility with mini-batch training (Wep3)
- the tradeoff between prediction accuracy and uncertainty quantification (Wep3, M4K7)
- relation with related works on GNN-based uncertainty quantification (M4K7, XvAj)
- efficiency of baseline UQ methods (y5h3)
Please see our point-by-point response to all questions raised by each reviewer’s comments below. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the problem of providing faithful and "efficient" uncertainty estimates for GNNs. Here faithful means the unknown groundtruth is contained in the prediction set with a probability higher than a threshold; efficient means the prediction set should be as small as possible. Specifically, the proposed method is based on the conformal prediction method, which uses a separate calibration set to determine the threshold used to decide whether a class (for classification) is included in the prediction set.
The authors first show that conformal prediction can be applied to GNNs as long as the samples in the calibration and test sets are exchangeable. This further leads to the observation that the test-time coverage of the predictions fluctuates considerably when the number of test samples is small. The authors term this problem the "inefficiency" of conformal prediction.
To improve both faithfulness and efficiency of the uncertainty estimates, the authors propose to train a new GNN that uses the original GNN's predictions as input and adjusted predictions as output. Instead of employing the original predictive loss, the paper proposes a loss that is a differentiable proxy of the efficiency metric. This model is trained with a pseudo calibration and test set split from the original validation set.
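The split-conformal recipe summarized above (calibrate a score threshold on held-out labeled nodes, then include in each test node's prediction set every class whose score clears the threshold) can be sketched as follows. This is an illustrative reconstruction using the simple $1-\hat{p}_y$ nonconformity score, not the paper's code; all function and variable names are our own:

```python
import numpy as np

def split_conformal_classification(cal_probs, cal_labels, test_probs, alpha=0.05):
    """Split conformal prediction with the simple 1 - p_y nonconformity score.

    cal_probs:  (n_cal, n_classes) predicted class probabilities, calibration set
    cal_labels: (n_cal,) ground-truth labels for the calibration set
    test_probs: (n_test, n_classes) predicted probabilities on the test set
    Returns a boolean (n_test, n_classes) mask: True = class is in the set.
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # A class enters the prediction set if its score is below the threshold.
    return (1.0 - test_probs) <= qhat
```

Under exchangeability of calibration and test nodes (the condition the paper establishes for the transductive GNN setting), such sets contain the true label with probability at least $1-\alpha$; the "efficiency" being optimized is the average set size.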
Experiments show better uncertainty estimates compared to well-known approaches such as MC dropout, although the baselines are not designed specifically for GNNs.
Strengths: - The paper formally justifies that conformal prediction can be used under the exchangeability assumption.
- The proposed method is simple yet effective: directly optimizing for better uncertainty quantification on a potentially much smaller model.
Weaknesses: - It seems that the calibration GNN needs to be trained for every $\alpha$ separately if we want to have multiple calibration thresholds. It would be nice to show whether it is possible to train a model that adapts to multiple $\alpha$s.
- It would be useful to show the tradeoff between the predictive performance and the uncertainty quantification performance. For example, the change in top-1 accuracy of the calibrated and original prediction.
- Could the calibration GNN suffer from overfitting? In an extreme case, if $\mathcal{V}_{\mathrm{cor \_cal}}$ and $\mathcal{V}_{\mathrm{cor \_test}}$ are both small, it might be possible that the calibration GNN minimizes its loss by overfitting the labels and making all $V$s trivially 0. Maybe this is one reason for only giving the calibration GNN the output of the base GNN?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - To evaluate $\hat{\eta}$, all samples from the calibration set are needed. If the graph is too large to be processed in a single batch, is it possible to train the calibration GNN in a mini-batch fashion? How would this affect the performance?
- It would be nice to show the training cost of the proposed method. I would expect training the calibration GNN could be much faster, but it is better to have more quantitative results.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I do not see potential negative societal impact of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback, and for recognizing that our approach is simple yet effective. We appreciate the thoughtful questions posed and address them in detail below:
> [1] Train a separate model for every $\alpha$?
The reviewer correctly identifies a vital aspect of our approach: CF-GNN does require a separate model for each $\alpha$, because $\alpha$ is predefined in conformal prediction. We acknowledge that an extension to accommodate multiple $\alpha$s simultaneously could augment our method's flexibility; this is an area for future research.
In our calibration GNN, the chosen $\alpha$ plays a role in the loss function. A solution to include multiple $\alpha$s might be to aggregate the losses for these $\alpha$s. This will make the calibration GNN universal for multiple $\alpha$ values, but possibly suboptimal when compared to a model trained for a specific $\alpha$ value.
It's crucial to note that in most real-world scenarios, the miscoverage level $\alpha$ is often fixed (e.g., 5%), so training multiple models isn't typically a major concern, as only one or a limited number of $\alpha$ values are usually explored. This makes our CF-GNN approach practical. This aspect will be thoroughly discussed in our revised manuscript.
> [2] Tradeoff between uncertainty quantification and predictive performance?
Thank you for posing this insightful question. From our understanding, uncertainty quantification refers to the prediction set and its size, whereas predictive performance relates to the precision of the point estimate. We will explain based on this understanding, but please feel free to correct us if you meant something else.
For regression tasks using quantile regression, CF-GNN focuses on adjusting the $(\alpha/2, 1-\alpha/2)$ quantile band. Since the quantile bands are typically not used for point prediction, this adjustment does not influence the predictive performance of the model.
In classification tasks, the correction step within CF-GNN could indeed modify the top-1 class prediction (we view this as the “point prediction”). However, this correction typically does not result in a visible change in accuracy. We show this with newly conducted experiments below:
|Data|Before|After|
|-|-|-|
|Cora|0.844 ± 0.004|0.843 ± 0.016|
|DBLP |0.835 ± 0.001|0.832 ± 0.002|
|CiteSeer|0.913 ± 0.002|0.911 ± 0.002|
The efficiency of prediction sets and the accuracy of point predictions are typically distinct goals requiring different optimal models. Our method employs a second calibration GNN to separate these goals, using the base GNN for point prediction and CF-GNN for uncertainty quantification. In a new experiment on Cora, we find that 100% of the base GNN's top-1 classes are contained in CF-GNN's prediction sets. The potential to develop steps that explicitly consider point-prediction accuracy is an exciting avenue for future research, and this will be discussed in the revised manuscript.
> [3] Overfitting when the correction dataset is small? Possible to make all Vs trivially 0?
Thank you for raising this pertinent question!
**On the Issue of Trivially 0 Vs**: In classification tasks, APS computes a cumulative sum of class probabilities, ordered by softmax score, down to the ground-truth class; since the class probabilities sum to one and the true class is always included, the $V$s cannot be close to zero. In regression, a consistency loss keeps the quantile bands close to the original prediction, likewise ensuring that the $V$s do not diminish to zero. Thus, our design safeguards against $V$s becoming trivially 0.
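To make the argument concrete, the (unrandomized) APS nonconformity score can be sketched as below; this is our own illustrative reconstruction following Romano et al. (2020), not the authors' implementation, and the names are ours. Because the sorted probabilities are accumulated down to and including the true class, the score is bounded below by the true-class probability and cannot be trivially zero:

```python
import numpy as np

def aps_score(probs, label):
    """APS nonconformity score without the randomized tie-breaking term:
    the total probability mass of classes ranked at or above the true class.

    Since probs sums to one and the true class is always included in the
    accumulation, the score is at least probs[label], i.e. never zero for
    a class with nonzero predicted probability.
    """
    order = np.argsort(-probs)        # classes sorted by descending probability
    cum = np.cumsum(probs[order])     # cumulative mass down the ranking
    rank = int(np.where(order == label)[0][0])
    return float(cum[rank])
```

For example, with predicted probabilities `[0.5, 0.3, 0.2]`, the score for class 1 is `0.5 + 0.3 = 0.8`.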
**On the Risk of Overfitting**: Indeed, when the size of the correction set is exceedingly small, the $\hat{\eta}$ estimate may be less accurate or too variable, which potentially leads to a bias in the adjustment of prediction scores. However, our empirical studies in Figure 5 (2) show that CF-GNN's efficiency remains robust across different calibration set sizes:
|Holdout Calibration Set Fraction|CF-GNN|
|--|--|
|10% (36 nodes)|2.05±0.06|
|30% (109 nodes)|2.14±0.12|
|50% (182 nodes)|2.16±0.11|
In addition, we emphasize that this does not impact the validity of conformal prediction. The separate conformal prediction procedure that follows the correction step always guarantees valid coverage (relying only on the exchangeability condition we derive), even in the face of potential overfitting in the correction step, and thus acts as an additional layer of protection.
> [4] What if the graph is large?
Thank you for this insightful comment! We can modify the mini-batching procedure when the graph is large as follows: in each step, in addition to the sampled training-set batch, we also make separate predictions for the correction calibration/testing set. These predictions are used to estimate $\hat{\eta}$ and calculate the inefficiency loss. Since we set the correction set size to min(1000, (|D_calib| + |D_test|)/2), for large graphs the correction set is capped at 1,000 nodes, which incurs little computational overhead.
Here we demonstrate its performance on the OGB-arXiv dataset which has 169,343 nodes and 1,166,243 edges. We observe it achieves consistent improvement over base CP on efficiency. We will discuss the large graph issue in the updated paper.
| Method | Size |
|-|-|
|CP| 8.79±0.19 |
|CF-GNN| 4.60±0.15|
> [5] What is the training cost?
Thanks for raising this. The calibration step scales similarly to standard GNN training, which is scalable with mini-batching and sampling. Also, CF-GNN has a small input node-attribute size (the number of classes for classification and 2 for regression), which is often much smaller than the node attributes in the original graph dataset (e.g., in OGB-arXiv, the node-attribute size is 128). Using a single Nvidia RTX 2080 Ti (11 GB) and PyG, the time to train CF-GNN for the smallest graph, Cora, is ~3 minutes, and for the largest graph, OGB-arXiv, ~1 hour. We will discuss the scalability issue in the updated paper. | null | null | null | null | null | null
A generative model of the hippocampal formation trained with theta driven local learning rules | Accept (poster) | Summary: The paper is an application of the learning scheme derived in Bredenberg et al. 2021 to representation learning and path integration in the MEC and HPC. By adapting the aforementioned scheme to continuous time, and by relating the proposed oscillatory 'gating' signal to theta oscillations that have been observed to gate dynamics in HPC and MEC, the authors propose to model representation learning in these two brain areas as a form of rapidly alternating 'wake-sleep' learning where brief periods of generative prediction dynamics alternate with more standard position-driven dynamics and deviations between path-integrated predictions and sensory stimulus-driven inputs are used to drive learning at both apical and basal synapses. Subsequently they demonstrate that their MEC model is able to capture two interesting features, learned exclusively through local synaptic plasticity rules:
1. The MEC is able to learn a ring attractor capable of path integration based only on motion cues that is able to accurately predict subsequent dynamics.
2. This attractor can easily generalize to a new environment without modifications to its recurrent synaptic weights.
Strengths: While previous models have successfully captured the features I previously mentioned through various optimization schemes, they have not shown how such learning could occur through local, biophysically motivated synaptic plasticity that can be cleanly related to a representation-learning objective function (in this case, the authors relate their learning to a variant of the ELBO objective). This is the principal success of the paper, and in my mind could prove to be a very important model for learning orchestrated between the MEC and HPC.
The authors also relax several constraints on the learning derived from Bredenberg et al. 2021:
1. They generalize learning dynamics to continuous time.
2. They include self-motion information in the 'generative' part of learning dynamics, which allows for path integration.
3. They relax constraints on gating of plasticity that are tied to theta cycles, simplifying learning and requiring fewer assumed variables that are locally available to synapses.
Furthermore, the paper is clearly written and logically presented.
Weaknesses: To me, the principal weaknesses of the paper come from additional analyses that the authors could have done, but did not do. I will list several below:
1. The authors do not test their learning algorithm in more complex environments. Simple two-dimensional environments such as boxes or circles could be used to test whether or not the learning algorithm is sufficient to develop grid-like cells in MEC. More complex image-based environments could be used to test the scalability of the proposed learning algorithm, and could potentially test whether or not the proposed learning algorithm is also sufficient to develop place cell responses (rather than having them hand-provided for the system). Given that algorithms similar to Wake-Sleep do not scale as well to high-dimensional problems as backpropagation and that the brain is presumably able to learn place cell information indirectly from multimodal sensory input (vision, audio, vestibular inputs, etc.), this is a valid concern.
2. The authors do not discuss testable predictions for their learning algorithm--I can identify several, which I would like to see discussed in more detail. In particular, though the authors suggest that after learning has converged the 'generative' and 'inference' phases of learning 'become indistinguishable,' this is both untested and unlikely to be true for nondeterministic systems and agents operating in partially observable environments with stochastic transitions. In particular, conditioned on the system being in the generative phase, stochasticity in the neural network and uncertainty in the environment should cause the precision of the path integration estimate to decrease throughout time (as the system advances deeper into the theta phase). In fact, the authors' results (Fig. 3e) appear to show exactly this phenomenon. In particular, this is closely related to results from Ujfalussy & Orbán, Elife, 2022, which use similar phenomena to support the notion that some form of sampling tied to theta phase oscillations is occurring in the hippocampus.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Could you elaborate on simplifying assumptions made about hippocampal circuitry? It seems to me that the hippocampus plays no role in inference here (it just relays sensory information as an identity function without any plasticity and plays no role in path integration), whereas in your model the 'unlearned' sensory layer that prevents representational collapse could have been an earlier stage of sensory processing like the thalamus, allowing basal synapses in the hippocampus to learn as has been observed in experimental data (e.g. BTSP).
What happens in this model during 'sleep'? I.e. if there are no vestibular cues provided to this network, does it still sample along its path integrator in a way that preserves spatial information? How does this relate to existing experimental information on sleep 'rehearsal' in the hippocampus?
What happens if more noise is introduced into the dynamics of the network during both inference and learning? In particular, I'm concerned about the absence of an explicit gate on plasticity. While it is conceptually an improvement to remove gating of plasticity tied to the theta oscillation, in the model of Bredenberg et al. 2021 the update is not strictly zero for inference parameters during the inference phase--it is only zero in mean (the same is true for the generative phase). As noise in the neural output grows, these updates may become increasingly variable, which could compound errors over time or cause some form of representational drift. Have you tested for this possibility? If not, it may be important to note that the gate is only unnecessary in near-deterministic noise regimes.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: There are no obvious negative societal impacts of the authors' work, and beyond the questions I've raised above, the authors do an adequate job of addressing the limitations of their work and explicitly relating their contribution to previous studies.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review; we are glad that you find our contribution "very important" and "clearly written". We respond to your key points below, including an additional simulation to answer one of your questions. Please kindly inform us if there's anything else required to boost your score.
> ...The authors do not test their learning algorithm in more complex environments. e.g. two-dimensional...[or]... image-based environments.
Please see general response section __2D environments__ for discussion and preliminary results.
Regarding image-based environments, we share your suspicion that wake-sleep with no add-ons may not scale well. We made sure not to overstate our claims in the paper. In fact, the reason we chose to model hippocampus, not earlier sensory cortices, is precisely so we can ignore the complexities of these regions and focus on the core computations relating to structure learning in HPC. This is a similar approach to Sorscher et al. (2023) and Whittington et al. (2020). We acknowledge our proposal is only a partial solution and won't account for all learning within the brain.
> The authors don't discuss testable predictions...In particular...that after learning has converged the 'generative' and 'inference' phases of learning 'become indistinguishable,' [which is] unlikely to be true for nondeterministic systems.
Thank you for suggesting this. We agree and will make this point clearer in the manuscript by removing the word 'indistinguishable'. Of course, environmental or neural stochasticity will always cause the generative predictions to "drift" and prevent total convergence. An additional point we'll add is that we speculate the presence of this drift within each sleep phase could explain why fast neural rhythms (5 Hz) are used, as they prevent drift accumulating and damaging perception.
Regarding other testable predictions from our model, we list a few here:
* Decoupling HPC from sensory inputs shouldn't immediately damage path integration abilities since this comes from MEC.
* Decoupling HPC from MEC should damage path integration but not inference.
* The effect of inhibiting layers in the hippocampal formation optogenetically should depend strongly on which phase of theta we inhibit at.
* Synaptic learning rules in hippocampal and entorhinal populations may be fundamentally similar.
>Could you elaborate on simplifying assumptions made about hippocampal circuitry? It seems that the hippocampus plays no role in inference...
That's mostly correct although, as we demonstrate in Fig S2, the mapping from sensory to hippocampus doesn't have to be the identity. Also, hippocampus relays information _to_ MEC but also receives information _from_ MEC, and these synapses are all fully plastic. As you correctly identified, HPC is not involved in the path-integration mechanics, which is precisely why the system can "transfer" this ability between environments once hippocampus has learnt to translate the new sensory inputs into the old MEC code. So its role is not trivial.
You're also correct that fixed sensory inputs prevent representational collapse in HPC, but they wouldn't prevent collapse in MEC. Since MEC didn't collapse, this suggests we could extend the model deeper to include a thalamus which drives HPC via learnable synapses. This is almost certainly closer to what happens in the brain and would be an interesting future direction. For now we avoided this complexity to focus on our specific goals of answering questions about path integration and structure transfer.
> What happens in this model during 'sleep' if there are no vestibular cues...
In our model, if there were no vestibular (aka velocity) inputs this would be equivalent to path integrating a zero velocity so the bump attractor would stay still. Synaptic noise could, in theory, cause this bump to randomly drift and this drift would preserve spatial information, i.e. it would not jump randomly from place to place. In practice, however, the quasi-continuous nature of our attractor makes each location a very slight local minimum, stabilising the bump against drift. This "stickiness" during path integration was also observed by Vafidis et al. (2021, eLife) and may be a problem for agents attempting to perform path integration at very low velocities.
In terms of how this relates to hippocampal "rehearsal", relevant hypotheses include:
* Large quantities of noise overcome the quasi-continuous stickiness and drive drift during sleep/replay in the absence of vestibular inputs.
* Top-down processes drive "artificial" vestibular inputs which push the bump around the manifold. This hypothesis was put forward, but not tested, in an opinion piece by Sanders et al. (2015, Trends in Neuro).
* Feedback inhibition "destabilises" the bump causing it to move of its own accord (Chu et al. NeurIPS 2022)
Our model is well placed to answer these questions but, given the time constraints of this rebuttal, we look forward to tackling them in future work.
> What happens if more noise is introduced into dynamics of the network during both inference and learning? In particular, I'm concerned about the absence of an explicit gate on plasticity...
This is an astute observation. One of the core philosophies we take is to avoid hand-crafting learning rules to include terms which may be non-trivial for biology to implement. Explicitly gating plasticity at sub-theta timescales is one such example. Instead we answer empirically, via simulation, whether this term in the theoretical learning rule is strictly necessary.
We find that plasticity gating is _not_ necessary. To test this, and in response to your review, we ran the same path integration experiment but with __10x more noise__. As shown in the attached pdf, this results in noisier but still accurate path integration. Only once we increase noise to 100x does learning fail. Thank you for encouraging us to check this; we'll add this result to the supplement.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your detailed response. I believe that your response essentially confirms my initial score (7, conf. 4). Good work!
---
Reply to Comment 1.1.1:
Comment: Thank you again for the fair and thorough review | Summary: In this work, the authors give a continuous version of the 'impression learning' [1] and use the theta oscillation to modulate wake-sleep phase.
[1] Bredenberg, Colin, et al. "Impression learning: Online representation learning with synaptic plasticity." Advances in Neural Information Processing Systems 34 (2021): 11717-11729.
Strengths: The authors links the theta oscillations to $\theta_t$ in 'impression learning', which gives a neural implementation of the wake-sleep algorithm in Helmholtz machine.
Weaknesses: From a computational perspective, the contribution lacks novelty.
In Section 5.2, the authors clarify their relationship to 'impression learning'. They list four differences:
1. continuous version.
The discrete version is equivalent as long as the time steps are sufficiently small. And the authors did not provide the mathematical form of the Evidence Lower Bound (ELBO) for the continuous-time version.
2. $\theta_t$ can be ignored.
3. disregard $1-k_t$.
These are only technical improvements.
4. Link the theta oscillation (5Hz) to $\theta_t$
This is a special case of 'impression learning'. And the authors did not provide an explanation of how it is implemented from a computational standpoint. They merely established the connection without detailing the computational aspects.
Overall, compared to 'impression learning,' I do not believe this work has enough novelty from a computational perspective. Although it provides a biological explanation of 'impression learning,' I do not think this paper is suitable for the current venue. Perhaps journals like Nature Neuroscience/Communications, or eLife would be more appropriate.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The latent variable $z$ is not used in the current model, so just disregard it. Eq.(1) can be ignored. If you want to use $z$, there must be a decoder from the neural activity $r$ to $z$.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 1 poor
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you this review. We’d like to clarify our contributions in case they were misinterpreted. As the manuscript states; “The primary contribution of this paper is to introduce a biologically plausible model of sequence learning in the hippocampus which unifies its capacities as a generative model of sensory stimuli and path integration under one schema”. The intended goal was to shed light on brain function using insights from the ML and not – at least not primarily – to introduce an entirely new model of representation learning (although we make some minor contributions in this regard).
A criticism leveled within this review is that our model doesn't have "enough novelty from a computational perspective". We’d argue this critique slightly misses the point and is oversimplified. Nobody has proposed to model hippocampus as a hierarchical Helmholtz machine nor implemented such a model with learning rules, dynamics and architecture so closely matched to biology, nor shown such a model works. This is a novel computational model.
Perhaps the question comes down to scope rather than significance. We are grateful for your comment recognising this work as suitable for high ranking “journals like Nature Neuroscience/Comms”, however, we also believe that NeurIPS is _the_ preeminent venue for research at the interface of neuroscience and ML and that biologically plausible implementations of key ML algorithms lie well within its scope. A myriad of NeurIPS papers support this claim:
* Pogodin et al., NeurIPS 2021, Towards Biologically Plausible Convolutional Networks
* Greedy et al., NeurIPS 2022, Single-phase deep learning in cortico-cortical networks
* Liu et al., NeurIPS 2022, Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators
...being the most recent. See also:
* Koren et al., 2022, Pemberton et al., 2022, Lipshutz et al., 2020, Bahroun et al., 2021, Clopath et al. 2007, Hasselmo et al., 1994,
... all published in NeurIPS.
NeurIPS mission statement is to “bring together researchers in machine learning, neuroscience, statistics, …” with listed relevant topics including “Neuroscience and cognitive science (e.g., neural coding)”. Given the comments and ratings from the other three reviewers we are confident that this sentiment is still held within the community today.
In light of this discussion and the additional comments/changes made below we’d be grateful if you’d consider revising your rating and will be happy to attend to changes you consider necessary. Thank you again for the review.
### Point by point response
> The discrete [and] continuous versions are equivalent...
You're correct, shifting to continuous equations was not fundamental however it allows us to directly compare to real world temporal phenomena e.g. we can say how long in seconds, rather than how many "steps", the path integrator is accurate for.
> authors did not provide the mathematical form of the ELBO in continuous time
Although we agree this is a great suggestion, it would be less trivial than it may seem. Unlike, say, Bredenberg et al., we did not derive our learning rules starting from a loss function; hence, given the modifications we made, it's not guaranteed one would even exist. We'll carry on giving the discrete formula in the supplement.
> ...are only technical improvements.
These technical improvements are not, themselves, the core contributions of the paper though they may still be considered minor relaxations of past results.
These four points you raised were merely the changes we made to the impression learning architecture for experiment 1. Of course, in later figures/experiments we made additional changes which are novel too. These include:
* The addition of self-motion inputs, which is crucial for path integration, to the latent hidden layer.
* A significant increase in the number of neurons per layer.
* Training the network on stimuli deriving from a complex and biologically relevant spatial exploration task.
* Continuous synaptic noise (see methods)
* ...
> no explanation of how it is implemented from a computational standpoint
We are confident all equations (including learning rates, initialisations, time constants and noise additions) have been detailed in the paper and supplement however if you inform us which equations are missing we’ll be happy to add them. Additionally, section “5.2 Relationship to online Bayesian Inference” connects to our model to impression learning.
> ...I do not believe this work has enough novelty...
To reiterate we consider the core novel contributions of our paper to be
* Creating an interpretable neural model for hippocampal function inspired by previous literature on Helmholtz machines.
* Successful training of this model in biological regimes to demonstrate it replicates numerous aspects of hippocampal function.
> ...this paper is more suitable for...journals like Nature Neuroscience/Communications...
Thank you, and we take this statement as implying the content of our work is good but that you feel the topic is out of scope for NeurIPS, in which case we respectfully disagree. Please see our above arguments.
> The latent variable z is not used in the current model, so just disregard it...
We had difficulty understanding this comment - if you could clarify we would be more than happy to address this. z is an important variable as it represents the underlying environmental latent so it cannot be discarded.
Thank you for this response. You’re right that not _all_ components of this model’s architecture are entirely novel. This need not mean it lacks computational novelty outright, nor was creating a new ML model ever the intended goal. Even for readers with no interest in biology, bioplausible implementations of ML algorithms are within scope as these lead the way toward novel implementations that better our understanding of how artificial intelligence relates to its biological counterparts.
---
Rebuttal Comment 1.1:
Comment: **I would like to reiterate and highlight my concerns. Please do not misinterpret my review.** I am not suggesting that neuroscience-related papers are inappropriate for NeurIPS. I also acknowledge that your paper has made some contributions in terms of experimentation. However, compared to 'Impression learning', the novelty of the current paper may not be sufficient for publication at NeurIPS. Assessing novelty indeed lacks absolute standards, but I also wish to quantitatively express my evaluation of the novelty in this paper.
The authors summarize the novelty of their paper into two points,
1. Creating an interpretable neural model for hippocampal function inspired by previous literature on Helmholtz machines.
2. Successful training of this model in biological regimes to demonstrate it replicates numerous aspect of hippocampal function.
However, 90% of the novelty in the first point comes from 'impression learning'. In fact, the first point should be accurately summarized as 'Making modifications to the impression learning model and using it to interpret the hippocampus.' All these modifications are purely technical, rather than computational. I believe that the remaining 10% of technical novelty is not substantial enough to support a significant argument.
Hence, the second point constitutes the main contribution of this paper. However, I perceive the second point as merely an engineering implementation. In other words, this paper falls under the category of an engineering-oriented work, lacking any computational insights for me. I believe that neuroscience articles published in NeurIPS should not lack computational novelty.
In simple terms, the computational novelty in this paper has largely been consumed by 'impression learning'. The remaining novelty might be better suited for publication in a journal, rather than emphasizing computational novelty in a NeurIPS context.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response, we really appreciate you clarifying your comments.
We strongly disagree. In our paper we generate a new model of the hippocampus by developing and expanding a link with an existing theoretical framework (generative models e.g. Helmholtz machines) - opening the way for a richer understanding of neural computations. We do this using only local learning rules and show the resulting model can learn biological functions such as path integration. We’re pretty sure this is novel and well within NeurIPS’ remit.
Specifically, we would like to push back on two points:
1. __Our modifications to the impression learning algorithm are significant.__ For example, the addition of vestibular inputs to the hidden layer of a Helmholtz machine; this is entirely novel and absolutely non-trivial. It was _not_ done in the impression learning paper _nor_, to the best of our knowledge, elsewhere. These are what allow it to learn a tuned path integrator. Likewise, the contributions listed above are non-trivial as they empirically prove that pared-down, biologically plausible learning rules have substantial learning power and can be theoretically interpreted in the context of prior work (i.e. generative models). We are happy to do more to clarify the relationship with prior work, but we don’t understand the “10%” statement.
2. __Even without them, our model is conceptually insightful__. The fact that this algorithm, under the additional elements we have added, yields behavior seen in hippocampus (path integration, bump attractors, transfer learning etc.) is an important conceptual insight, not at all an obvious outcome, and one which we think you may have unfairly disregarded in your review. As we said before, this core contribution is valuable, novel and well within scope for NeurIPS.
While our work __does__ make what we consider key methodological contributions, it is worth pointing out that even work that does not introduce novel methods but rather identifies the relationship between proposed methods and neural correlates provides valuable contributions to the field and has previously been considered within scope for NeurIPS. Here are two more examples of highly influential (> 100 citations) NeurIPS publications which applied unmodified ideas from ML to neural systems...
* __CNNs _with no novel computational modifications_ explains retinal responses to natural scenes__, McIntosh and Maheswaranathan et al. NeurIPS 2016
* __The successor representation _with no novel computational modifications_ explains the behaviour of place cells and grid cells__, Stachenfeld et al. NeurIPS 2014
Again we reiterate that we __do believe we make substantial modifications__ but, even if you disagree, hope these serve as counter-examples for why such a hard line on NeurIPS submissions might impede progress. It may even be disadvantageous in the sense that it discourages reuse and consolidation of ideas. | Summary: This work presents a neat model that incorporates aspects of hippocampal function under one umbrella:
* first, the input from the environment (z) goes into the sensory layer (p) and activates the internal state (g) in a certain way; the model captures this as an "inference" or "wake" stage of the training
* next, a set of recurrent connections in the "internal state layer" g simulates the prediction-making, guessing the next internal state, simulating the predictive mechanism in navigation (or potentially other cognitive functions)
* finally, a generative pathway goes the opposite way from g down to p and attempts to generate what the sensory input should look like
Training of the inference and the generative parts of the model is alternated with a 5 Hz square wave function, which reflects existing observations of the role of this oscillation in the hippocampus.
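For concreteness, the wake/sleep alternation described above can be sketched as a square-wave gate. This is a minimal illustration only, assuming a 5 Hz (200 ms) cycle sampled every 25 ms; the function and variable names are my own, not the paper's:

```python
def theta_phase(t, freq_hz=5.0):
    """Return 'wake' or 'sleep' from a square wave at freq_hz (assumed 5 Hz)."""
    phase = (t * freq_hz) % 1.0  # position within the current cycle, in [0, 1)
    return "wake" if phase < 0.5 else "sleep"

# One 200 ms theta cycle sampled every 25 ms: first half wake, second half sleep.
schedule = [theta_phase(i * 0.025) for i in range(8)]
print(schedule)  # 4 'wake' samples followed by 4 'sleep' samples
```

Each sample of the schedule would decide whether the inference or the generative pathway drives the somatic compartments at that moment.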
The contribution of the paper is just to propose a model that captures simple dynamical processes, with a potential extension to path integration.
------------- Update after the rebuttal period -------------
I would like to thank the authors for a very detailed rebuttal and being engaged in the discussion on both the technical and the ~philosophical levels.
My main criticism was based on the idea that just training a model that captures something is not enough, because there is an infinite number of models & architectures that will be able to do that. However, after the rebuttal period and skimming the paper again, I came to think that the model presented in this paper achieves more than just capturing the dynamics by "any means necessary": it actually does so under heavy restrictions, and, while there are still multiple such models possible, the fact that the model still works under said restrictions makes this result interesting.
I am raising my score to "7: Accept" as this work, in my estimation, achieves exactly the required level of "high impact on at least one sub-area" (computational modelling of the hippocampal formation) and has "good-to-excellent evaluation, resources, reproducibility".
Strengths: This is a well-written paper; it has a good flow and is quite understandable. The proposed model combines various ideas about brain function in an elegant way and, while remaining simple, does manage to make those ideas work in unison. The related work section gives great context for the work. The experimental work is well explained and documented.
Weaknesses: As someone who is not coming from the attractor perspective, I fail to understand the significance of an attractor appearing. In my mind (please help me understand why this is not the case), the appearance of an attractor is exactly the goal and purpose of training. Basically, the way I see it, if we have a certain dynamical process and we have successfully trained a neural network to capture its dynamics (the model can predict s(t+1) from s(t)), then saying "it formed an attractor" is equivalent to saying "the learning has converged". But isn't that precisely the purpose of training the model? The mere fact of it converging (and forming an attractor that captures the training data trajectory on a manifold) is, of course, a good thing -- the learning was successful -- but it is not in any way unexpected or remarkable. After all, this is precisely what we wanted to happen: we make an effort to build a model and a learning process that captures the data, and if done correctly, that is exactly what it is going to do.
In this work the fact that the generative part of the model had formed an attractor (aka "can generate the dynamical process correctly") is presented as a significant outcome. But from the machine learning perspective this is a trivial result: that IS what happens when you train something successfully, and it is not sufficient to imply a special biological significance of the model.
Put it this way: if I made this model have not 1 sensory and 1 hidden layer but, let's say, 3 sensory and 4 hidden layers, it would, of course, also form some sort of an attractor (with a more complex underlying manifold and shape due to the fact that we have more layers), but that would not tell me anything about this model's biological plausibility.
I guess what I would like to discuss with the authors and other reviewers is whether creating A machine learning model has scientific value, or whether (as I posit) we need to take a step further and offer a model that satisfies more than just capturing the dynamics, but also makes predictions or coincides with biological restrictions that we did not explicitly encode, etc. Then -- yes -- we could say that not only does this model capture the dynamics, but it is also the only variant (or at least belongs to a small family of variants) that also does X, Y and Z the way the brain does.
In the olden days of modeling, proposing a set of differential equations that captured the dynamics was impressive because it was not a given that such a set of equations could ever be found. Such a scientific endeavor could fail if the scientist was not able to describe the dynamics, so when a model was found nonetheless, the contribution was significant. But can we really apply the same criteria to ML-driven modeling? In the case of ML it is almost a given that a "set of equations" (now represented by an artificial neural network) will be found to satisfy the data. That tells us nothing more than "this data can be described" (which is rather unsatisfying), and there is likely to be an infinite number of such models (even the same architecture, trained again from a different initialization starting point, would likely produce a "new" model).
I am looking forward to discussing this with other reviewers and the authors, to help me understand (1) whether I am wrong about the (in)significance of an attractor appearing, and (2) whether it is too prohibitive for science to ask more from a model than its ability to capture the data (especially simulated data) before counting it as a significant contribution to the field?
For the authors, one way to make this critique constructive and actionable, I guess, would be to say more in the paper about why this particular model (its architecture and other characteristics) is especially suited to describe hippocampal dynamics and why the same conditions would not be satisfied by any number of similar models.
Other points:
* It would be great to see some ablation experiments that would help understand that the observed experimental results are unique to the proposed model and would not emerge if some of the critical components of the model are turned off (and help assess their criticality).
* I also ask this as a question below, but I wonder why 2D environments were not explored experimentally? It would add so much to the work, make it relevant to more people and help build connections to existing work on hippocampal function, in particular place/grid cells.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Fig 1: What would be the appropriate intuition for the g layer? If z is "input" and p is "sensory", then should we think of g as an "internal state"?
95: Are there studies that support the idea of the theta rhythm acting as the direction switch? Are there recordings that somehow show the change in the direction of the flow of information?
147: Although in the Introduction (line 102) you mention that there is no discrete-time assumption and the whole thing works continuously, I still can't help but wonder how many weight updates "fit" into one 5 Hz cycle for the "wake" and "sleep" steps? Is this number driven by the computational power of the simulation hardware, cramming in as many updates as possible, or is there a biologically-informed constraint that says something like "within one theta cycle a synaptic connection can be updated only X times"? How does your implementation of the model handle this? Note: while the formulation is continuous, the act of updating is still a discrete step that takes place at some point, and this is what my question is about: basically, how many times within one cycle is the line of code that does w = w + update executed?
156: Is it correct to think of g->g recurrent connection as a step in the internal representation space where the brain tries to predict what the next (t+1) state will look like? And this internal representation is then used as a starting point for the generative part of the model to correctly generate the state of the environment z_t+1?
161: What is the "ground truth" for training w_p_A and w_g_A in the wake cycle? How do we know what we should train toward? I understand that the training is local and works to minimize the differences between dendritic (apical for the wake cycle, right?) and somatic activation, but how do we know that the dendritic activations represent the "true" ones and that bringing the system closer to them will result in successful learning?
161: Same question rephrased: what is considered to be correct output of the inference part of the model?
164: What is the ground truth of the generative training step? Let's say I run 1 pass from g -> p -> z, where does the learning signal come from? Do we compare the generated z_t+1 with the actual one to know whether the generative step has generated the correct stuff? What is considered the "correct" output of the generative part of the model and what this output is compared with in order to be able to say "yep, it has generated correctly"?
209: When you say that only the "wake" part of the model is activated, how do you conclude that it "correctly autoencodes"? I guess this question is still stemming from the ones above, but I fail to see what is considered the ground truth. Does it go all the way to z, or is it sufficient at this point to train w_g_B (as per Fig 1) to produce such a g_t that, after the g->g step creates g_t+1, this g_t+1 will correctly be reconstructed via w_p_A back into the original p?
209: Another question here is: when you "autoencode", do you autoencode p_t into p_t, or do you "autoencode over time" so that p_t autoencodes into p_t+1? If not the latter, then I am confused about when the g->g step happens and what the intuitive meaning of that step is.
(Sorry for the avalanche of questions, I imagine these will be hard to parse, I tried to formulate them the best I could :) )
233: If the hippocampal "internal" part of the model is trained to autoencode g->p, then why would we expect a sensory lesion to affect that in any way? You say it is remarkable that it keeps generating, but isn't that exactly the behavior we would expect? I am trying to understand why the fact that g->p keeps working is "remarkable" when we turn z off. Since z is not part of that machinery, isn't it trivial that g->p will keep doing its thing? Please let me know what I am missing, and if the observation is not as trivial as I postulate, then what would be the trivial behavior that we would expect but that is remarkably not occurring?
280: It would be interesting to confirm that MEC has indeed learned some basic property of path integration by running an ablation experiment where MEC is randomly initialized and only the MEC <-> HPC connections are allowed to train for 30 minutes. As with echo state networks, it might be the case that learning MEC <-> HPC is always sufficient even if MEC's internal state is random. In the ablation experiment you can check whether using a pre-trained MEC is any different from using a randomly initialized MEC (given the same approximate shape of the value distribution): would it result in longer re-training times, worse performance, etc., or not?
301: Would anything change (in terms of the attractor, performance, etc.) if instead of 5-10 Hz oscillations between wake and sleep you used 1 Hz? 20 Hz? Is the 5 Hz crucial to the success of the model, and if yes, what would "break" if another rhythm were implemented?
* What was the main reason for choosing such a simple experimental environment for this work? A 2D navigation experiment would help show so much more and build an empirical connection to place/grid cells.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: There is no discussion of the limitations in the paper, and while I personally do not consider it a must-have, in this case it would be useful to help understand how far-reaching the authors deem their claims and results and which aspects rely on strong assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for this incredibly thorough review. Your insightful comments have led to meaningful improvements. We respond point-by-point below but these first paragraphs are reserved to further your philosophical discussion about the goals of computational modeling.
### General response about the merits of computational modeling.
In ML, models are “trained” to minimize an objective function almost always with some type of gradient descent. In this case, users “put in” to their objective functions exactly what they want, and often manage, to get out. As you said, in the olden days this was merit-worthy but less so today with the powerful algorithms/architectures available. Even language comprehension is arguably a solved problem.
On the other hand, in computational neuroscience people often describe “learning” which, at a coarse level, involves simpler learning rules reminiscent of those ubiquitously observed across the brain. End-to-end optimization of objectives with backpropagation is considered implausible.
We are strongly in the "learning" regime. Obviously with the right objective function and optimizer we could have achieved the same results. A network trained this way would perform better but it would be sufficiently trivial that, without further justification, we wouldn’t have submitted it to NeurIPS. In our case we view the attractor and other results of learning as _emergent_ properties. As we’ll elaborate in the point-by-point it was not the only possible solution and, although as scientists we were guided by intuition to suspect this solution, it was in no way “built in” to the optimisation procedure.
Of course an objective function (sort of) exists – in our case we liken the system to a Helmholtz machine optimizing an ELBO-style loss – but that does not make the solution trivial since the learning rules, architecture and dynamics with which we (approximately) optimize this are heavily constrained by biology: no backprop, no long memory traces, no non-local information. The primary value of our paper is therefore demonstrating that the learning rules and architecture of HPC can be seen, through the right mathematical lens, as roughly the same as a well-studied machine learning system (we don't think anyone has done this before). To put it another way, we took a step towards discovering the objective function of HPC and (to us at least) this finding is profound, speaking to deeper organizational principles at play within the brain.
All said, we believe we are in firm agreement with you: the value is absolutely not in a model converging but in what this teaches us about the brain. We believe the goal of modeling should be to (i) unify previously disparate concepts and (ii) make testable predictions. On both accounts we believe we are achieving this goal. Thank you again for opening the discussion on this important topic. If you agree we’d be happy to carry on the discussion below and if it’s made you reconsider the contribution of our paper we’d be grateful if you might revise your rating to reflect this.
We'll now respond point-by-point but the character limit means we won't fit all your questions in this box. We hope it is ok if we finish responding to your questions in a comment which we'll post after the rebuttal period closes.
### Point by point response (part 1/4)
> ...I fail to understand the significance of an attractor appearing....an attractor is exactly the goal and purpose of training....
Perhaps it would help to begin by clearly stating a question which, prior to our work, we do not believe there would have been consensus within the field about the answer to: _“Can unsupervised local learning in a system with one hidden layer receiving unstructured spatial inputs learn transferable path integration or is back propagation through time and/or deeper non-linear architectures a requirement?”_
State-of-the-art before us was that relevant HPC modelling almost always used non-local learning rules and powerful optimization algorithms which find path integration / attractors as, in your words, this was “precisely the purpose of training” (e.g. Sorscher et al. 2023 and Banino et al. 2018). It's unclear whether this was because the authors weren’t interested in biological plausibility or because it was _necessary_ for successful training.
To reiterate our earlier point, we don’t just put in what we get out. We don't just do literal gradient descent on a hand-crafted objective function. The appeal of the attractor is exactly that it emerged from a system so heavily constrained by the realities of biology, lending weight to the suggestion that our model may contain some truths about the brain.
An observation from Fig 3c may help: notice a subset of neurons have converged on a trivial but perfectly valid solution of zero weights. They “died”. Their existence neatly demonstrates that a functioning attractor manifold over all/most hidden units is not _the_ solution but rather _a_ solution. We are the first to demonstrate this solution can emerge under the constraints used. Also, (see points further down) the system doesn't predict $s(t+1)$ from $s(t)$ but $s(t)$ from $s(t)$ making the future-predictive nature of the attractor more remarkable.
This is why we respectfully disagree with the statement that
> saying "it formed an attractor" is equivalent to saying "the learning has converged".
Sure, learning converged. But nothing a priori guaranteed an attractor would be the fixed point. In the remaining space we will add sentences to clarify our interpretation of this result and also reword instances where we have lent too far toward the interpretation that convergence alone is the key take home result, which it is not.
---
Rebuttal Comment 1.1:
Title: Point by point response (part 2/4)
Comment: > ...[a model should] make predictions or coincide with biological restrictions...
You’re absolutely right that dissimilar models can explain the same variance or data. The difference here is the strong correspondence between the elements we use to build the model and the known biology of hippocampus. This model does make unique testable predictions, here’s a couple:
* Decoupling HPC from sensory inputs should not immediately damage path integration abilities since this comes from MEC.
* Decoupling HPC from MEC should damage path integration but not inference.
* The effect of inhibiting layers in the hippocampal formation optogenetically should depend strongly on which phase of theta we inhibit at.
* Membrane potentials at basal and apical dendrites should converge over time.
* Synaptic learning rules in hippocampal and entorhinal populations may be fundamentally similar.
* ...
No previously published model makes all of these predictions, and some of them are made by none.
Unrelated to neuroscience, studying the brain’s implementation of ML algorithms can contribute beyond data-fitting by shedding light on how it can learn in a more data- and energy-efficient manner than most large-scale ML algorithms, potentially leading to improvements.
> ...Discuss whether (1) am I wrong about the (in)significance of an attractor appearing and (2) whether it is too prohibitive for science to ask more from a model than it being able to capture data
(1) Your point is well taken but we respectfully disagree. We think the appearance of an attractor is significant given the constraints on the system within which it emerged.
(2) Mostly we agree but perhaps propose that there is a sliding scale, for example people can now answer a qualified “yes” to the question we posed above and use this to move discussions forward. Our model makes predictions as listed above and in the manuscript. We are already looking towards testing this model on real data as part of ongoing work.
> ...one way to make this critique constructive and actionable...
We’ll highlight and extend sections where we have done this and add additional sentences directly contrasting our model to others for the camera ready. Thank you for this constructive suggestion.
> It would be great to see some ablation experiments...
Some ablation studies have already been performed to stress test the model
* Lesioning of sensory inputs (Fig 3).
* Relaxing the constraint that HPC receive unimodal spatial inputs (Fig. S2b)
* Relaxing the identity constraint on weights from MEC to the conjunctive units (Fig. S2c)
In the attached pdf you’ll find results for three new ablation studies, summarised in the general response section __Additional simulations__. These were performed in response to your review. Thank you for this suggestion which has led to a genuinely interesting set of new results. We will include these (and any others you feel are important) in the supplement.
> ...I wonder why 2D environments were not explored...
See general response __2D simulations__ for discussion and preliminary results.
> the appropriate intuition for the g layer
“Internal state/latent” would be a fair interpretation of $g$. We’ll clarify this in the paper.
> Are there studies that support the idea of the theta rhythm acting as the direction switch
Yes, some. Although recall from Fig. 1 that information flows bidirectionally _at all times_ in our model. The "direction switch" happens internally within the neurons and affects which direction information flows contiguously through the hierarchy. All synapses are still active at all times, which would make measuring the switch directly hard. Papers/results linking theta phase to the direction of information flow include:
* Analysis of theta phase precession plots shows they are bimodal indicating that the first and second half of the cycle are used for different computations (Yamaguchi, J.Neurophysiology, 2002)
* Hasselmo (2002, Neural Comput. and 2014, Neuroimage) showed that hippocampal dynamics are consistent with alternating phases of encoding ($\sim$“wake”) and retrieval ($\sim$“sleep”).
* Sanders et al. 2015 proposed (though didn’t computationally test) a similar idea that distinct phases of theta are separately used for inference and prediction.
* Ujfalussy & Orbán, Elife, 2022 (pointed out by another reviewer).
We’ll add the above citations which weren’t in the manuscript already.
> ...how many weight updates "fit" into one 5 Hz cycle for "wake" and "sleep" steps?...
The answer is $\frac{T_{\theta}}{dt} = \frac{200\,\text{ms}}{25\,\text{ms}} = 8$, but there is nothing fundamental about this number. The shift from discrete to continuous equations was also not fundamental (as you say, we discretise eventually at the level of simulation) but does allow us to directly compare the system and its dynamics to real-world temporal phenomena, e.g. we can measure how long the path integrator is accurate for in seconds, not "steps", which feels useful.
---
Rebuttal Comment 1.2:
Title: Point by point response (part 3/4)
Comment: > [are] g->g recurrent connections [trying] to predict what the next (t+1) state will look like?
Yes. The $g$->$g$ weights update the internal representation from $g(t)$ to $g(t+dt)$ as given in Eqn. 3. A fuller version (Eqn. 10) reminds readers that Eqn. 3 merely approximates a continuous dynamical system in which case the $g$->$g$ weights are better understood as the parameters of the function determining the _rate_ at which $g(t)$ is changing.
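As an illustration of this "rate of change" reading (the paper's actual Eqn. 10 is not reproduced here; the leaky recurrent dynamics below are a generic stand-in and all names are assumptions), a discrete update can be viewed as one Euler step of an underlying ODE whose velocity is set by the recurrent weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W_rec = rng.normal(scale=0.1, size=(4, 4))  # stand-in for the g->g weights

def dgdt(g):
    # Generic leaky recurrent dynamics: W_rec parameterizes the *rate* of change of g.
    return -g + np.tanh(W_rec @ g)

def euler_step(g, dt):
    # Discrete update g(t) -> g(t + dt), approximating the ODE above.
    return g + dt * dgdt(g)

g0 = rng.normal(size=4)
coarse = euler_step(g0, 0.025)   # one 25 ms step, as in a discrete update rule
fine = g0.copy()
for _ in range(25):              # 25 steps of 1 ms covering the same interval
    fine = euler_step(fine, 0.001)
# The two trajectories agree closely: the discrete rule is a coarse ODE solver.
```

The point of the sketch is only that the same weights can be read either as a one-step predictor or as the parameters of a continuous flow, depending on the step size.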
> What is the "ground truth" for training $w_{p_A}$ and $w_{g_A}$ in the wake cycle
We don’t know! That’s partly why this is interesting. Each layer just “locally” (or "greedily") optimises its own synapses in a way which is not _necessarily_ optimal for the other layers.
As already mentioned another stable solution under these local dynamics is when $w_{g_A} = w_{g_B} = 0$. The entorhinal system could die/collapse which would certainly be undesirable for the hippocampal layer below it. A better solution is when the non-zero recurrent apical predictions in $g$ match the non-zero basal predictions (arriving from $p$) but, as you correctly identified, this system is not sufficiently constrained to determine the one and only final solution for $w_{g_A}$.
There is some weak cross-talk between layers. Concurrently, during wake, the sensory layer $p$ wants to match its apical (from $g$) to its basal (from $z$) inputs. If $g$ is a lousy latent representation the system can't achieve this and $w_{p_A}$ will continue learning. This can then indirectly affect $g$ since $w_{g_B}$ will be trained during the subsequent sleep cycle on inputs from $p$ which ultimately came through $w_{p_A}$. It is possible that this weak and indirect interaction between learning in $p$ and learning in $g$ is what allows a global solution to be found since both layers keep learning until stability (if this ever occurs) but nothing guarantees or enforces their cooperation.
Compare this to backpropagation, where interlayer updates are connected via the chain rule and layers effectively “cooperate” towards a mutually optimal solution. Not so with local learning, where the emergence of mutually compatible representations supported by a calibrated attractor manifold is a non-trivial result.
> What is the ground truth of the generative training step?...What...output [allows me to say] "yep, it has generated correctly"?
Again, there is no “ground truth” in the way the term is used in supervised learning. There are only representations ($z$, $p$ and $g$) and mappings between those representations ($w_{p_A}, w_{p_B}, w_{g_A}, w_{g_B}$) which preserve information.
Here's a simpler answer (but please don't presume the learning rules are directly optimizing for this solution). In a linear system the “yep, it has generated correctly” moment would occur whenever $w_{g_B}$ and $w_{p_A}$ are pseudoinverses of one another. In our case there is a non-linearity to account for as well, but the intuition holds.
To answer your question more fully, figure 1 panel c may help: during the generative “sleep” phase the weights which will be updated are $w_{g_B}$, which map the sensory layer $p$ to $g$ (through the dashed-green arrow). During this sleep period $p$ is receiving top-down drive from $g$ (through the thick blue arrow). Thus the generative step is entirely self-supervised! During sleep MEC is decoupled from reality. It is dreaming and learns from these dreams. But, as is normal for wake-sleep algorithms, any learning which occurs during these dreams constrains learning during the subsequent wake phase (see comment above), and so on and so forth until (hopefully) convergence. We appreciate the oscillatory nature of this model can be a bit mind-bending at times!
> ...how do you conclude that is "correctly autoencodes"?
We claim it “correctly autoencodes” the input if the sensory input $p_B$ is matched by the prediction $p_A$ (when the system is forced into a "wake" cycle). If the system were linear this would be satisfied if $w_{g_B}$ and $w_{p_A}$ were pseudoinverses of one another, so the mapping from $p$->$g$ is undone by the mapping back from $g$->$p$. We will add the pseudoinverse intuition for autoencoding to the manuscript to clarify.
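The pseudoinverse intuition can be checked numerically in a linear toy model. This toy is mine, not the paper's: `W_enc` and `W_dec` stand in for $w_{g_B}$ and $w_{p_A}$, and I assume an overcomplete hidden layer so the encoder has full column rank:

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_g = 4, 8                      # sensory and (overcomplete) hidden sizes

W_enc = rng.normal(size=(n_g, n_p))  # stand-in for w_gB: p -> g
W_dec = np.linalg.pinv(W_enc)        # stand-in for w_pA: g -> p

p = rng.normal(size=n_p)
g = W_enc @ p                        # "wake" inference step
p_rec = W_dec @ g                    # generative reconstruction

# pinv(W_enc) @ W_enc is the identity when W_enc has full column rank,
# so the p -> g -> p round trip recovers p (up to floating-point error).
```

With the non-linearity of the real model this identity only holds approximately, but the round-trip picture is the same.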
> ...do you autoencode p_t into p_t or do you "autoencode over time" so that p_t autoencodes into p_t+1...
It is the former, i.e. same-step prediction ($p_B(t) = p_A(t)$). Sorry if the phrasing “over time” confused things; we will reword.
The $g$->$g$ step happens all the time but it’s only during sleep that it becomes important. The key is to remember the compartmental nature of the neurons. During the wake phase, sure, $g$->$g_A$ is _active_ but this then stops at the dendritic compartment $g_A$ and does not gate into the soma, hence cannot impact anything further down the hippocampal hierarchy. Only during sleep does $g$->$g_A$ become important, as it then gates back into $g$ and down into $p$.
---
Rebuttal Comment 1.3:
Title: Point by point response (part 4/4)
Comment: > Sorry for the avalanche of questions, I imagine these will be hard to parse...
So far so clear! We’re happy to answer them.
> If the hippocampal "internal" part of the model is trained to autoencode g->p, then why would we expect sensory lesion to affect that in any way?
You’re absolutely right and have not misunderstood. $z$ is not part of the generating machinery but, before lesion, acts as a calibrating signal returning the attractor to the correct place every theta cycle. Therefore, back to our main point, what is “remarkable” (although we will reword this for the camera-ready) is that an accurate path integrator emerged in the first place.
> It would be interesting to confirm that indeed MEC has learned some basic property of path integrating by running an ablation experiment.
We did exactly this experiment (and it failed, showing MEC learning is important). See general response section __additional experiments__.
> Would anything change if instead of 5-10 Hz oscillations between wake and sleep you would use 1 Hz? 20 Hz?
In a related study, Bredenberg et al. (NeurIPS 2021) tested the impact of increasing or decreasing sleep-wake cycle durations. In summary, the training frequency doesn’t matter in terms of how fast learning converges, but the biggest effect comes when you analyse the online dynamics. This lower-bounds the desired frequency: if the wake-sleep cycles are too slow then prediction accuracy during sleep can quickly “drift” off as the bump attractor inevitably decalibrates. This is disadvantageous for a system which wants to maintain accurate perceptions at all times during behaviour. There may be an upper bound to the frequency in terms of the temporal dynamics of synaptic plasticity or neuronal membrane timescales (neither of which are modelled in detail here), which require clean, distinguishable sleep-wake cycles. One speculative conclusion is that 5-10 Hz is _just fast enough_ to enable accurate perception at all times but _just slow enough_ to satisfy these dynamic constraints.
> A 2D navigation experiment would help show so much more and build empirical connection to place/grid cells.
We are looking into this for future work, please see general response section __2D environments__ where we have some positive preliminary results.
__Thank you__ again for this excellent review which has prompted a lot of thought. We hope we have satisfactorily answered all your questions and made the suggested additions to improve the manuscript. If it has made you reconsider our paper's contribution we'd be grateful if you'd consider revising your score to reflect this, or let us know what else might be required.
---
Rebuttal Comment 1.4:
Comment: Hi everyone, sorry for the delay, I will now work through the responses and comment as I move along.
> the system doesn't predict s(t+1) from s(t) but s(t) from s(t) making the future-predictive nature of the attractor more remarkable.
Indeed, this is an important clarification! It does put significant brakes on my objection that you get out what you put in.
Another thing I had not appreciated enough is how local the update rule is. I will comment on this below.
---
Rebuttal Comment 1.5:
Comment: Ok, the responses do not thread under each box, so I might as well put it all together into one.
Response part 3/4 made me realise that the locality of learning is crucial to everything, and it is what makes many of the following observations non-trivial. To help deliver this to other readers (who, one might hope, would read carefully, but you never know) -- perhaps you could split 2.2 into two separate subsections, one that explains the local rules and how those are different from end-to-end, and a second about gating with theta oscillations?
Thank you for such a detailed rebuttal! Your responses and additional experiments have improved my understanding a lot, I will re-read and re-evaluate with this new appreciation for non-triviality of some of the outcomes.
---
Reply to Comment 1.5.1:
Comment: Thank you for responding, we're glad our rebuttal was clarifying. It's our job to make this clear to the reader and we agree that dividing section 2.2 will help clarify the results and explain their "non-triviality" in comparison to non-local learning so we're happy to make that change.
Once you've had a chance to rerate our work please let us know if there are any further questions so we can respond before the end of the rebuttal period!
---
Rebuttal 2:
Title: Follow-up
Comment: Hi, just wanted to make sure you didn’t miss this. Let us know if you have any additional questions, we'll be happy to answer. | Summary: The hippocampus is postulated as a generative model that learns latent state representations and generates sensory predictions to solve spatial and nonspatial tasks. Theta-band oscillations are used to gate information flow into the generative model to modulate learning. A ring attractor develops within the generative model for path integration and flexibly transfers structures between environments. These computations are performed using biologically plausible learning rules.
Strengths: - Scheduling neural dynamics with theta oscillator: Model neurons are usually described as input summation followed by a nonlinear transformation. Here, the authors expanded the computational dynamics of a neuron by coupling the membrane potential with the theta oscillator (Eq. 4). This affords fine-grained control or better scheduling of the neurons and plasticity using a central gating mechanism.
- Biologically plausible learning rule: The model seems to be an extension of the MESH architecture, which uses the pseudoinverse learning rule. Here, the network uses Hebbian-like learning algorithms to learn a ring attractor for path integration and can dynamically relearn a new environment even when some of the weights are fixed, adding to the list of biologically plausible models.
Weaknesses: - Not a completely novel architecture: Although the multi-compartment neuron architecture controlled by the theta oscillator is novel, past works have described similar neural architecture (Sharma et al., 2022; Han et al., 2020) of having two hierarchical layers recurrently connected. I am curious to know how these neural architectures correspond to the author’s proposed architecture and if there can be some overlap in learning performance?
- Insufficient analysis: It is not clear what computations the generative model performs using the ring attractor. The inference model seems to resemble place-like receptive fields in Fig. 4b, but so does the generative model? I anticipated grid-like fields instead, but this is not the case? It might also be interesting to show how the synaptic weights ($W_{g_B}$, $W_{p_A}$) evolve through learning and offer some insights into their computation, similar to Fig. 3c.
- Inclusion of network prediction prior to learning: Could the authors include the y axis information in Fig. 2b to show how much of the prediction error has been reduced through learning, and the activity of the inference (Fig. 2c) and generative (Fig. 2d) model prior to training to determine the difficulty of the task and the effectiveness of learning?
- Path integration capability: Fig 3e demonstrates that the mean decoding error increases monotonically beyond 20 cm by 5 seconds. Can this high error still be claimed to be accurate path integration, where prior to the lesion, error was almost 0? What else could be integrated into the model such that the error does not increase as fast as in Fig. 3e?
- Application to a 2D navigation task: Authors demonstrated the model’s application to a 1D navigation task (left and right) which is rather limiting. Learning to path integrate in a 2D environment will be much more convincing. Furthermore, if the authors could demonstrate the agent's ability to perform vector-based navigation by estimating its location using path integration and recalling a goal location from a memory system (Kumar et al., 2023; Foster et al., 2000), this will be a complete contribution to the research in biologically plausible spatial navigation using the entorhinal-hippocampal model.
- Insufficient ablation studies: Ablation studies of the theta oscillator, multi-compartment neurons and individual layer plasticity will give us a better understanding of the roles played by each of these components for learning a ring attractor for path integration and remapping to a new environment.
References:
Han, D., Doya, K., & Tani, J. (2020). Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks. Neural Networks, 129, 149–162. https://doi.org/10.1016/j.neunet.2020.06.002
Sharma, S., Chandra, S., & Fiete, I. R. (2022). Content addressable memory without catastrophic forgetting by heteroassociation with a fixed scaffold. ICML. http://arxiv.org/abs/2202.00159
Kumar, M. G., Tan, C., Libedinsky, C., Yen, S., & Tan, A. Y.-Y. (2023). One-shot learning of paired associations by a reservoir computing model with Hebbian plasticity. ArXiv. http://arxiv.org/abs/2106.03580
Foster, D. J., Morris, R. G., & Dayan, P. (2000). A model of hippocampally dependent navigation, using the temporal difference learning rule. Hippocampus, 10(1), 1–16. https://doi.org/10.1002/(SICI)1098-1063(2000)10:1<1::AID-HIPO1>3.0.CO;2-1
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Would the plasticity rule still work if it was purely Hebbian plasticity i.e. $dW/dt = p(t)*g(t)$ without the additional normalizing term?
- The authors contrasted the wake-sleep algorithm to backpropagation. Could the authors compare the intuition of the wake-sleep algorithm to contrastive Hebbian learning? This might be a way to describe contrastive Hebbian learning using a biologically plausible alternative.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have sufficiently addressed their future directions. Model limitations and conditions at which the model breaks could have been further explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for this detailed review which has led to a number of changes to the paper. We have addressed your comments and questions below but please respond with any additional questions which we’re happy to answer. Apologies if our answers seem at all "curt", we are heavily constrained by the 6000 character limit but would like to cover all your points.
> Not a novel architecture....Sharma et al., Han et al.,...
We weren't familiar with the work of Sharma et al. but agree their architecture is similar, including its relationship to HPC-MEC. Two differences stand out: firstly we focus on generative modelling of the temporally varying environmental stimuli but they focus on content addressable memory of static stimuli. Additionally we use multicompartmental neurons which "multiplex" top-down and bottom-up signals and a biologically plausible learning rule. It is exciting to see these architectures applied to hierarchical RL as in Han et al. Comparing learning performance between these models would be quite non-trivial as all three networks perform substantially different tasks (generative modelling, memory and RL). We'll add these references.
Whilst there are numerous other examples of two-layer recurrent architectures in ML, we'd debate whether this is a "weakness". The combination of its features and results makes our model novel even if the architecture alone is inspired by previous works. This allows us to transfer insight and intuition from these works directly onto the neural system of interest, without reinventing the wheel. It is _because_ our model is related to existing generative models that a deeper interpretation of hippocampal function is possible. See general response section __Contributions__ for more on this point.
> I anticipated grid like fields...
Let us clarify: the MEC fields in fig 4b are a special case where we fixed HPC->MEC weights to the identity, which is why we see place-like fields in MEC (see line 520). In general our MEC representations are multimodal and grid-like (Fig S2a). Actually, they may be even more grid-like in larger environments, where the attractor manifold would repeat itself due to the circular symmetry. We'll add a sentence to make this distinction clearer. Should the paper be accepted we'll be happy to add the temporal evolution of the weights as a figure to the supplement.
> Inclusion of network prediction prior to learning...
Thank you; we will make your proposed changes to the figures. Currently the y-axis in Fig 2b is log-base-10 meaning the $g$ prediction error decreases by 10x and $p$ by slightly less.
> Path integration capability...
This rate of error accumulation is only slightly worse than that of comparable models, e.g. Sorcher et al. 2023, which reaches $\sim$10 cm error after 200 cm distance travelled (=> 4 seconds at our agent's speed). Apples-to-apples comparisons are hard: we focus on small networks of $\sim$100 neurons and local learning rules, which will always be outperformed by large networks (~4000 neurons) trained using backpropagation through time.
Steps to improve our path integrator (which we'll mention in the paper) include
* Gate some sensory data into soma during path integration, mirroring nature's lack of full sensory lesions (e.g. even in the absence of visual cues, here-unmodeled olfactory or tactile cues would persist).
* Increase the number of neurons.
> Application to a 2D navigation task...
See general response section __2D environment__ for discussion and preliminary results.
> ...vector-based navigation...
We agree vector navigation merits further study, being tied to grid cells and continuous attractors akin to those within our network. Regrettably, time constraints prevented us from including these extensions in our manuscript as we focused on novel results instead of replicating existing ones.
> Insufficient ablation studies...
Some ablation studies have already been performed to stress test the model
* Lesioning of sensory inputs (Fig 3).
* Relaxing the constraint that HPC receive unimodal spatial inputs (Fig. S2b)
* Relaxing the identity constraint on weights from MEC to the conjunctive units (Fig. S2c)
In the attached pdf you’ll find results for three new ablation studies, summarised in the general response section __Additional simulations__. These were performed in response to your review. Thank you for this valuable suggestion which has led to a genuinely interesting set of new results. We will include these (and any other ablation studies you feel are important) in the supplement.
> Would the plasticity rule work if it was purely Hebbian...
No. The second term can be viewed as a normaliser, or equally as the weight-dependent term in the “target” equation $U-V(w)$ (see Urbanczik, Neuron, 2014), pulling the dendritic voltage towards the somatic voltage until they are equal. Without it there is nothing to stop the weights from continuing to change/grow once they reach their target value of $U$, and blowing up. We’ll include a sentence to illuminate this intuition in the paper.
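A hedged scalar toy illustrating this intuition (a single weight with a constant presynaptic rate and the somatic target clamped; the numbers are illustrative and not drawn from the paper):

```python
# Scalar toy: two-term rule dw/dt = p*(U - w*p) vs purely Hebbian dw/dt = p*U.
# The weight-dependent term V(w) = w*p pulls the dendritic prediction towards
# the somatic target U, so learning halts at the fixed point w = U/p.
U, p, dt = 1.0, 0.8, 0.01   # illustrative target, presynaptic rate, step size

w_full, w_hebb = 0.0, 0.0
for _ in range(5000):
    w_full += dt * p * (U - w_full * p)  # drive vanishes once w*p reaches U
    w_hebb += dt * p * U                 # no stopping condition: grows forever

print(round(w_full, 4))  # 1.25 = U / p, the fixed point
print(round(w_hebb, 4))  # 40.0 and still growing linearly in time
```

The purely Hebbian weight grows without bound, while the two-term rule settles exactly where the dendritic prediction matches the somatic target.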
> ...compare...to contrastive Hebbian learning?
We aren't experts on CHL but were interested to read your comment and agree there are a number of intriguing similarities, including alternating target-driven and non-target-driven phases and two-term Hebbian-like learning rules. Without a more detailed mathematical analysis, which is arguably out of scope for the current paper, it is hard to say anything more strongly than this. Instead, may we suggest that we flag these similarities in the paper for others to take note of; we look forward to investigating this in future work as we likewise suspect there may be a deeper connection between the two.
We hope our responses have addressed your inquiries and that the changes have enhanced the manuscript. If so, we'd be grateful if you'd consider revising your score to reflect this. If there are any specific actions we could take to further improve it, please let us know.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and running additional simulations. I still find the paper lacks clarity in motivating the novelty of the model architecture (why this architecture works even for different tasks, using network analysis) and justifying the seemingly high path integration error (experiments with different network sizes could have been included to justify the hypothesis). Although the ablation studies are useful, I am inclined to keep the rating as it is (R6, C4).
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. Regarding clarity of the motivation and novelty please additionally consider our rebuttals to the other reviewers, particularly UjUg, where we describe numerous changes to the manuscript which will improve this aspect.
Furthermore, as we explained, we'd argue the error accumulation rate isn't much higher than that of other, much larger and less biologically plausible models and falls in the ballpark of comparable models trained with local learning rules. We remain confident that increasing $N$ and training time would monotonically improve performance, as we see no intuitive reason for this not to be the case. We would be happy to add these experiments to a revised version of the manuscript.
Please let us know whether any of your questions remain unanswered and whether these revisions might make you reconsider your rating.
---
Rebuttal 2:
Title: Follow-up
Comment: Hi, just wanted to make sure you didn’t miss this. Let us know if you have any additional questions, we'll be happy to answer. | Rebuttal 1:
Rebuttal: We thank all reviewers for their detailed and thoughtful comments and are glad they found it to be a "well written" and "logically presented" paper about a model which "could prove to be very important". We respond to each review individually but, for the benefit of all, here summarize three major aspects of our responses including additional experiments we have performed.
### Contributions
To reiterate our contribution: this paper substantially builds on an existing literature modelling the hippocampal formation's role in navigation (including path integration) and transferable structure learning. We take the novel approach of formulating it as a Helmholtz machine constructed from hierarchical layers of multi-compartmental neurons and show it can be "trained" with simple, local and biologically plausible Hebbian learning rules which can be approximately derived starting from an ELBO objective function.
Additionally, we show how the wake-sleep algorithm can be implemented by somatic gating between basal and apical inputs controlled by the hippocampal theta-oscillation. All in all our paper links deeply theoretical ideas about generative models (e.g. wake-sleep) to puzzling biological concepts (e.g. neural oscillations) in a manner we hope can seed fruitful discussion and progress.
We want to make clear that our intention to model HPC as a Helmholtz machine (as opposed to dreaming up or deriving an entirely new architecture) was a very conscious choice. By not reinventing the wheel we can -- and hope the community will -- transfer insights from historical and recent ML research into generative models onto our understanding of the hippocampal formation. In doing so we avoid having to explain away many complexities with biologically implausible techniques such as backpropagation, as was done by others before us.
Our contribution should be of interest to both neuroscientists and ML'ers, for whom bioplausible implementations of key algorithms shed light on how the brain, and perhaps future AIs, can implement ML models in more data- and energy-efficient ways.
### Additional simulations
Reviewers suggested that additional ablation studies and experiments would strengthen our results. Here we summarise three additional experiments displayed in the attached pdf:
* __Removing plasticity from recurrent MEC synapses__. (Fig R1a) The result is that learning fails and path integration does not emerge indicating recurrent plasticity is crucial and that interlayer plasticity alone cannot bypass the need for a tuned attractor manifold. As suggested by reviewers PiE7 and UjUg.
* __Removing plasticity from HPC $\leftrightarrow$ MEC synapses__ (Fig R1b) results in HPC never achieving internal consistency between basal and apical inputs. Despite this the model can still path integrate. The interpretation here is that HPC simply relays sensory information into MEC but cannot then “translate” MEC predictions back to the sensory code. This would be a problem for a real behaving agent who could not then easily combine sensory or internal predictions and would need two decoders to predict position (one for when the hippocampus is in wake mode, and another for sleep). As suggested by reviewers PiE7 and UjUg.
* __Increasing the synaptic noise 10x__ (Fig R1c & d) confirms that learning path integration is robust up to substantial amounts of noise (Fig R1c & d). As suggested by reviewer Lhyx.
We thank the reviewers for these suggestions leading to interesting new results which we will include in the manuscript.
### 2D environments
Three reviewers mentioned that our results would be more convincing in 2D. To test this __we performed some preliminary experiments in 2D__, with the results shown in the attached pdf. Now the agent moves in a 2D environment and has four sets of conjunctive cells in charge of controlling north, south, east and westward motion. Just like in 1D, after learning we observe the local "centre-surround" nature of the learned recurrent MEC synaptic connections (a hallmark feature of bump attractors, Fig R2b left), and also that the synapses from the four sets of conjunctive cells are each skewed in four opposite directions (a hallmark feature of path integration, Fig R2b right). Fig R2c shows path integration where every 1 second sensory input is provided to recalibrate the 2D bump attractor. In between these instances the activity bump remains stable (albeit only briefly). This result indicates that path integration abilities are beginning to emerge in the 2D model. Note that, due to computational and time constraints, this model has not been tuned nor trained for the required amount of time and therefore it is likely we are not seeing optimal performance of the bump attractor.
To be clear, we do not intend these results as a comprehensive exploration of 2D and recognise they leave many questions unanswered. Unfortunately 2D takes substantially longer to simulate and tune due to the increased complexities of motion modelling and the much larger number of cells required. For these reasons, properly exploring 2D remains out of scope and we choose to focus only on 1D as the simplest setting where all core features of path integration can be tested comprehensively. We include them here as they give us some confidence that this approach would scale to 2 - and potentially higher - dimensions.
Pdf: /pdf/4df88cc6a5f21cd2d284c0136e4ef010041e8bdb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
PAC-Bayesian Spectrally-Normalized Bounds for Adversarially Robust Generalization | Accept (poster) | Summary: This work tries to develop a PAC-Bayesian spectrally-normalized robust generalization bound.
Strengths: This work tries to understand robustness from a theoretical perspective.
Weaknesses: 1. unclear definitions:
second line in Eq. (4)
$\mathbf{x}(\mathbf{w})=\arg \inf_{\left\|\mathbf{x}-\mathbf{x}^{\prime}\right\| \leq \epsilon} g_{\mathbf{w}}(\mathbf{x})$ (I guess it's $\mathbf{x}(\mathbf{w})=\arg \inf_{\left\|\mathbf{x}-\mathbf{x}^{\prime}\right\| \leq \epsilon} g_{\mathbf{w}}(\mathbf{x'})$)
$\mathbf{x}(\mathbf{w}')=\arg \inf_{\left\|\mathbf{x}-\mathbf{x}^{\prime}\right\| \leq \epsilon} g_{\mathbf{w}'}(\mathbf{x})$
$\inf_{\left\|\mathbf{x}-\mathbf{x}^{\prime}\right\| \leq \epsilon} g_{\mathbf{w}}(\mathbf{x})$
etc
2. Assumptions are not clearly described before theorem 1:
What are the conditions on $B$ and $\gamma$ for your theorem?
E.g., Neyshabur et al. (2017b) assumes $\mathcal{X}_{B, n}=\{ \mathbf{x} \in \mathbb{R}^n \mid ||\mathbf{x}||_2 \leq B \}$ and $\mathbb{P}_\mathbf{u} [\max_\mathbf{x} | f_\mathbf{w+u}(\mathbf{x})-f_\mathbf{w}(\mathbf{x}) |_\infty<\frac{\gamma}{4} ] \geq \frac{1}{2}$ (or the $\ell_2$ norm).
3. Miss citation for Line 482.
4. What is $||W_i-W_i||$?
5. Please provide the details from Eq. (10) to Eq. (11).
6. As claimed in the contribution: **without any additional assumption**, **as tight as**, **how to obtain a robust generalization bound**.
My concerns are: **Does this bound provide new information for us?** In my opinion, for (middle or large) DNNs, PAC-Bayes is not a tight bound but may provide inspiration for us to get a better generalization model. For the bound of this work, it just replaces $B$ with $B+\epsilon$. To me, it simply implies that $||\mathbf{x}||_2\le B$ (clean data), $||\delta||_2\le \epsilon$ (attack radius) thus $||\mathbf{x}+\delta||_2\le B+\epsilon$ (adversarial data), but it holds no significant meaning.
For the above reasons, I think this work should be rejected.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: ref Weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: ref Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Ccn3 for the comments and questions.
___
**Q1, Q3 and Q4.** Unclear definition, what is $W_i-W_i$. Miss citation.
A: Thanks for pointing out the typos and the missing citation. Some primes (') were dropped due to a full-width/half-width character issue; we have fixed these in the updated version. In $W_i-W_i’$, the second term should be $W_i’$. The second line in Eq. (4) is equal to the first line. Line 482 refers to the tail bound [1].
[1] Joel A Tropp. User-friendly tail bounds for sums of random matrices. Foundations of computational mathematics, 12(4):389–434, 2012.
___
**Q2:** What are the conditions on B and γ for your theorem?
A. Thanks for the question. $B$ and $\gamma$ are well-defined before the Theorem and are the same as in Neyshabur et al., 2017.
B is first defined in line 65. It bounds the magnitude of the training samples. We will also add $||x||_2\leq B$ for all training samples $x$ in line 135.
$\gamma$ is defined in line 147. It is the margin between the function value of the true label and the maximum over the false labels.
It is important to note that $\gamma$ is also the margin in Neyshabur et al., 2017. To clear up a possible misunderstanding: the inequality $P_{u}\left[\max_{x \in X} |f_{w+u}(x)-f_{w}(x)|_\infty <\frac{\gamma}{4} \right]\geq \frac{1}{2}$ is not an assumption on $\gamma$. It is the condition for Lemma 4, and it only serves as a bridge to prove the generalization bound.
___
**Q5.** Please provide the details from Eq. (10) to Eq. (11).
A. Thanks for the question. The first inequality is Lemma 6.2 (Eq. (9)). The second inequality is the tail bound (line 482).
Q1-Q5 concern typos or requests for further detail; we will fix these in the updated version. Below we answer your main question.
___
**Q6:** Does this bound provide new information for us?
**A:** Our main result offers **valuable new insights** into adversarially robust generalization, discussed in Section 5. We understand the main concern or confusion comes from the fact that we did not emphasize what is new. We will make the necessary revisions to distinguish old and new information.
The answer is provided above in common question 2. We hope it adequately addresses your concerns.
We hope you will also review our response to "common question 1" for additional context. Understanding the historical background of the theory of norm-based complexity is crucial to comprehending the significance of our findings. Further details are provided below. Following your reasoning, we decomposed the question into two parts:
1) the significant meaning of the bounds and
2) the inspiration to get better model.
___
**Q6.1.** the **significant meaning** of the bounds.
A. Replacing B by B + ε has a natural meaning.
In standard generalization bound: standard generalization ≤ B × spectral complexity means that **standard generalization** is related to **clean samples** and spectral norm (of DNNs).
In robust generalization bound: robust generalization ≤ (B + ε) × spectral complexity means that **adversarially robust generalization** is related to **adversarial examples** and spectral norm (of DNNs).
Therefore, replacing B by B + ε has a natural meaning: it replaces clean samples with adversarial examples when moving from standard to robust generalization.
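In symbols, this replacement is nothing more than the triangle inequality applied to a perturbed input:

```latex
\|\mathbf{x} + \boldsymbol{\delta}\|_2
\;\le\; \|\mathbf{x}\|_2 + \|\boldsymbol{\delta}\|_2
\;\le\; B + \epsilon,
\qquad \text{whenever } \|\mathbf{x}\|_2 \le B \text{ and } \|\boldsymbol{\delta}\|_2 \le \epsilon .
```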
The important message of the bounds lies in the spectral complexity, rather than the training samples.
___
**Q6.2.** The **inspiration** to provide better model.
As you mentioned, the bound may provide inspiration for us to get a better generalization model. Let us discuss about the inspiration.
**Firstly, why is the standard generalization bound important?**
It is proved that **standard generalization ≤ B × spectral complexity**. While the term "B" may not be directly controllable, the spectral complexity is within our control. Therefore, focusing on the controllable factor of spectral complexity presents a potential avenue to enhance generalization performance.
**Secondly, why is the robust generalization bound important?**
It lies in the widely observed phenomenon where deep neural networks (DNNs) exhibit strong standard generalization but poor robust generalization. It is important to see which factors contribute to this difference. The mismatched factors between the standard and robust bounds then provide rich information for understanding the disparity between standard and robust generalization.
**Previous results** showed that **robust generalization ≤ (B + ε) × spectral complexity and other factors**. $B+\epsilon$ (the magnitude of the adversarial example) is not controllable, but the other factors are. Therefore, spectral complexity and the other factors (width, gradient, additional assumptions) are all potential levers for improving robustness.
However, it is hard to verify which factors are more important, since we don’t know whether these bounds are tight in terms of the additional factors, or whether these factors can be further reduced. Therefore, providing a better bound is an important problem in learning theory. Our paper gives an affirmative answer.
**We prove that robust generalization ≤ (B + ε) × spectral complexity.**
Therefore, we dismiss the possibility of other factors (width, gradient, assumptions). At least from a theoretical perspective, these other factors arise solely from mathematical issues. This inspires engineers to pay more attention to spectral complexity as a potential factor for improving robustness. It is crucial to emphasize that the inspiration discussed here is derived solely from our result.
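Since the discussion above centers on spectral complexity as the controllable factor, a small illustrative sketch may help. In spectral-norm bounds such as Neyshabur et al.'s, the spectral complexity grows with the product of the layers' spectral norms (largest singular values); the matrices, values, and helper names below are hypothetical, and the exact bound includes additional normalization terms omitted here.

```python
# Illustrative sketch (not the paper's exact bound): spectral complexity
# as the product of per-layer spectral norms of a network's weight matrices.
import numpy as np

def spectral_norm(W):
    # Largest singular value of the weight matrix W.
    return np.linalg.svd(W, compute_uv=False)[0]

def spectral_complexity(weights):
    # Product of spectral norms over all layers.
    prod = 1.0
    for W in weights:
        prod *= spectral_norm(W)
    return prod

# Diagonal layers make the spectral norms easy to read off: 3 and 5.
W1 = np.diag([3.0, 1.0])
W2 = np.diag([5.0, 2.0])
print(spectral_complexity([W1, W2]))  # 15.0
```

Reducing any layer's largest singular value (e.g., via spectral normalization during training) directly shrinks this product, which is the sense in which the factor is "controllable".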
We hope our explanation can help understand why our result is important from an inspiration perspective.
___
Overall, we believe our work provides a fundamental result for the learning theory community. We consider the current scores of "Soundness: 1 poor" and "Contribution: 1 poor" very unfair to our work. We hope our answers address your concerns and that you will reconsider the significance of our work. If you have any further inquiries, please feel free to ask.
---
Rebuttal Comment 1.1:
Title: response
Comment: I thank authors for their careful reply.
As the typos in the equation (definition) destroyed my patience to carefully review this paper, I only spent an hour checking most mathematical parts.
I have to say, the current version is very unfriendly to a general reader in the community.
All in all, I think the current manuscript has its merits but also obvious flaws, I sit on the fence for this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Thank you very much for the response.
1) About the typos.
Thanks for pointing out the typos: 1) the missing prime in f(x) and Wi, and 2) the missing "=" in Equation (4). We have carefully proofread the paper to fix other potential typos.
2) the current version is very unfriendly to a general reader.
Thanks for pointing it out. Now, we have updated the paper based on the comments and questions for general readers. | Summary: This paper improves previous PAC-Bayesian bounds on robust generalization. The previous bound in Farnia et al. (2018) has a term that is not bounded, and this work provides a bound to that term using the Lipschitzness of feed-forward ReLU networks. The basic idea is that coordinate-wise Lipschitzness preserves under L-inf perturbation.
Strengths: I am quite familiar with this field and spent 3 hours reviewing this paper. Though I do not closely follow the robust generalization line of research, I believe that this work could be helpful to people working in the same direction. The overall framework is very clear and the results are intuitive and easy to understand. Though there are some confusing parts in some sections and the writing can still be improved, overall this looks like a nice paper and should make it to NeurIPS.
The overall framework is clean, and the proofs are easy to read. I quickly checked all the proofs, and they look good to me. I am not 100% sure that all proofs are sound, but even if there are small errors, they should be fixable because the results are very intuitive.
Weaknesses: My only concern is that the significance of this work might not be obvious to a person who is not very familiar with this field. This work is addressing a very specific issue in a previous theorem, and while this issue is important, I think the authors should clarify more about why it is important, what is the main challenge and how this paper fixes it, right at the beginning of the paper. Right now I would say that those things are quite scattered. For example, the main challenge is in Section 6.2. I think the authors could include the following in the intro:
- What is the main issue in the previous PAC-Bayes bounds?
- Why is it difficult to fix this issue?
- What additional assumptions does this work make in order to fix this issue?
- What is the main technical contribution in the proof of this work?
As someone very familiar with learning theory and adversarial robustness, I can find the answers to these questions easily in the paper. However, if the authors want this work to appeal to a more general audience, I suggest they rearrange the paper a little and discuss these questions in the intro.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See above.
**Post rebuttal note:** I have read the rebuttal and other reviews. I prefer to keep my score.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer FsEL for the comments and questions.
**Comment 1.** My only concern is that the significance of this work might not be obvious to a person who is not very familiar with this field.
A. Thanks for the suggestions. We understand that the significance of this work might not be obvious to general readers. In the intro, we will clarify:
1) why the targeted problem is an important unsolved problem, and
2) why our result is significant.
Such a discussion is also provided in our answer to common question 1.
**Detailed suggestion.** I think the authors could include the following in the intro.
A. Thanks for the suggestions. Some of them (2, 3) are already provided in the intro. The others (1, 4) are discussed but not emphasized. We provide our detailed answers and the corresponding modifications below.
**S1.** What is the main issue in the previous PAC-Bayes bounds?
This is discussed in lines 55-56. We will add: “The inclusion of these supplementary assumptions and an additional term is less than ideal. These adjustments were made as a compromise to address mathematical complexities."
**S2.** Why is it difficult to fix this issue?
This is discussed in lines 48-51.
**S3.** What additional assumptions does this work make in order to fix this issue?
This is discussed in line 58.
**S4.** What is the main technical contribution in the proof of this work?
The technical contribution is detailed in the paragraph titled "Technical Contribution" starting at line 73. The current exposition is presented at a high level; we will provide a more specific breakdown in the updated version:
Our approach to solving this problem involves two main aspects:
1) We introduce a crucial inequality, namely the preservation of the weight perturbation bound under ℓp attacks.
2) We restructure the proof by Neyshabur et al. in terms of the margin operator. This modification enables the application of the aforementioned inequality.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the rebuttal. I have read the rebuttal and will keep my rating.
---
Reply to Comment 1.1.1:
Title: Thanks for the response.
Comment: Thanks for the response. We have updated the paper based on the comments and questions. | Summary: In this work, authors use PAC-Bayesian bound to characterize the generalization gap of adversarial robustness. Their work is mostly based on the bound derived from (Neyshabur et al., 2017b) so the resulting bound is valid for a deterministic model.
Strengths: The major contribution from this work is the new PAC-Bayesian bound for adversarial robustness. The bound works for both Lp and non-Lp cases on both feed-forward networks and ResNets. Authors also compare their bounds with existing ones that only target specific attacks, and show their bounds are more generic. I appreciate the paper’s contribution of the bound (but I do have a question regarding the tightness of the bound which I will elaborate in the next part). The paper has pushed the use of PAC-Bayesian theories to scenarios beyond standard generalization.
Weaknesses: The first weakness of the paper is probably that this newly derived bound largely relies on the work from Neyshabur et al., which has been phrased as an advantage of Theorem 1 being at least as tight as Neyshabur et al.’s. However, this might be misleading, as I believe the correct description would be that when $\epsilon=0$ Theorem 1 reduces to Theorem 2, so they are as tight as each other. I am not sure I understand what it means for Theorems 1 and 2 to be equally tight when $\epsilon > 0$. Do you mean that what is in the big O notation is around the same magnitude? However, I do not see the value of $p$ (in $\ell_p$) play any role in the new bound, and the paper talks about general $\ell_p$ robustness, so I assume the bound works in any $\ell_p$ space. I hereby have the following question: if the bound is p-norm-agnostic, how come it is equally tight for all $p$? Does this bound simply characterize the robustness of the model in the largest ($\ell_\infty$) perturbation ball for a given $\epsilon$? If that is the case, perhaps the derived bound is pretty loose for $\ell_2$ perturbations. Can you elaborate more here (and in the paper) on the tightness of the bound for different $p$?
Another weakness is the empirical study to demonstrate the tightness of the bound in Table 1. This table is poorly captioned, because I do not know how these models are trained, what architectures are used, or what $\epsilon$ and attack techniques are used to report the numbers. Also, what does $\infty$ mean for Theorem 3? Does this mean the assumption about the gradient norm in Theorem 3 does not hold? Do you have experiments on the gradient norms? Moreover, I think it might be useful to use more models and more statistically significant numbers to show that the proposed bound is tighter, like the measurements done in this paper [1].
[1] Jiang, Y., Neyshabur, B., Mobahi, H., Krishnan, D., & Bengio, S. (2019). Fantastic Generalization Measures and Where to Find Them. ArXiv, abs/1912.02178.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I do not have further questions. I think the theoretical contribution is incremental so it may further increase my score if more empirical evaluations are conducted.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: It might be nice to include such a paragraph.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer AB19 for the comments and questions.
**Comment.** I think the theoretical contribution is incremental.
A. Thanks for the comment; we will answer it at the outset. The technical novelty of our research goes beyond mere improvement: the **technical novelty is the most important part** of our work. Let us clarify below.
Firstly, we provided a detailed comparison with Farnia et al. (2018) only because we both employ the PAC-Bayes framework. This comparison aims to shed light on the mathematical difficulties involved in deriving robust bounds. It should not be mistaken as implying that our results are merely incremental improvements on Farnia et al. (2018). The discussion of Farnia et al. (2018) aims to convey two messages:
1) Existing techniques cannot provide a robust bound unless new assumptions are introduced.
2) New mathematical tools are needed to prove a robust generalization bound.
We understand that our comparison to Farnia et al. (2018) may have misled the reviewers into considering our work incremental. We will make revisions to address this concern. Please see our detailed answer to common question 1.
___
**Q1.** The first weakness of the paper is probably that this newly derived bound largely relies on the work from Neyshabur et al., which has been phrased as an advantage of Theorem 1 being tight at least as Neyshabur et al.’s.
A. Thanks for the question. We consider this aspect a strength rather than a weakness. Note that this question is connected to "common question 1," and we believe the answer provided in our response to that question adequately addresses your concern.
In summary, the bound presented by Neyshabur et al. holds significant value as a benchmark bound. Our findings successfully resolve a question that has been open for several years, namely the tightness of robust generalization bounds.
___
**Q2.** However, this might be misleading as I believe the correct description would be that when ϵ=0 Theorem 1 reduces to Theorem 2, so they are as tight as each other. I am not sure I understand what it means for Theorems 1 and 2 to be equally tight when ϵ>0.
A. Thank you for your question. Understanding the concept of "equally tight" can be viewed from two perspectives:
At the first level, you are correct that as ε approaches 0, Theorem 1 converges to Theorem 2. Consequently, the two bounds become equally tight in this scenario.
At the second level, when ε is greater than 0, it implies two things:
1) The term "B" (clean sample) in the standard bound corresponds to "B + ε" (adversarial example) in the robust bound.
2) The other factors in both bounds remain the same.
To elaborate further, it has been proved that "B + ε" appears in the lower bound, as demonstrated in Theorem 3 of Xiao et al. (2022). As a result, when deriving the robust bound, we should replace "B" with "B + ε". Moreover, since the other factors remain unchanged in both bounds, Theorem 1 and Theorem 2 are deemed equally tight.
Xiao, J., Fan, Y., Sun, R., and Luo, Z.-Q. Adversarial rademacher complexity of deep neural networks. arXiv preprint arXiv:2211.14966, 2022.
___
**Q3.** Do you mean that what is in the big O notation is around the same magnitude?
A. No, that is not what we mean. Even though the answer is provided above, it is also important to note that the big-O notation omits the same constant factors for both bounds.
___
**Q4:** However, I do not see the value of p (in ℓp) plays any role in the new bound and the paper talks about the general ℓp robustness, thus I assume the bound works in any ℓp space.
A. Thanks for the question. The main result (Theorem 1) is stated in the ℓ2 norm. It is easy to calibrate the results to other ℓp attacks by a factor of $n^{1/2-1/p}$, where $n$ is the dimension of $x$. We deleted this discussion when condensing the article to 9 pages. Thanks for pointing it out; we realize that deleting this part caused unnecessary confusion, and we will add the discussion back in the updated version.
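As a side note for readers, a calibration factor of this form rests on the standard norm comparison $\|x\|_2 \le n^{1/2-1/p}\,\|x\|_p$ for $p \ge 2$. The following quick numerical check is our own illustration (not from the paper) of that inequality on random vectors:

```python
# Numerically spot-check ||x||_2 <= n**(1/2 - 1/p) * ||x||_p for p >= 2,
# the inequality behind rescaling an l2-radius bound to cover an lp ball.
import random

def lp_norm(x, p):
    if p == float("inf"):
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

random.seed(0)
n = 16
for p in (2, 4, float("inf")):
    # For p = inf, 1/p -> 0, so the factor becomes n**0.5.
    factor = n ** (0.5 - (0.0 if p == float("inf") else 1.0 / p))
    for _ in range(100):
        x = [random.uniform(-1, 1) for _ in range(n)]
        assert lp_norm(x, 2) <= factor * lp_norm(x, p) + 1e-9
print("inequality holds on random samples")
```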
___
**Q5.** Another weakness is the empirical study to demonstrate the tightness of the bound in Table 1.
A. Thanks for the questions. There seems to be a misunderstanding about Table 1, which aims to show the effect of spectral complexity on standard and robust generalization. The discussion of Table 1 is provided in Sec. 5.
Table 1 is not meant to prove the tightness of the bound. In fact, no experiments are needed to show that the proposed bound is tighter: it is strictly tighter because we remove an additional positive term.
We hope this fact addresses your main concern. The remaining detailed questions concern fair comparison, which is separate from the main concern; we nevertheless provide our answers below.
**Q5.1.** Training details
As we state in line 214, the experiment follows [3]. We use VGG-19 networks. For adversarial training, we use a standard PGD-20 attack with $\epsilon=0.5$.
[3] Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. Exploring generalization in deep learning. arXiv preprint arXiv:1706.08947, 2017a.
**Q5.2.** Also, what does ∞ means for Theorem 3? Does this mean the assumption about gradient norm in Theorem 3 does not hold? Do you have experiments about the gradient norms?
∞ means the bound is unbounded without additional assumptions, as we state in line 198. It is highly non-trivial to estimate a lower bound on the gradient norms over the whole domain. We will replace ∞ with N/A in Table 1.
**Q5.3.** Moreover, I think it might be useful to use more models and more statistically significant numbers to show the proposed bound is tighter.
No experiments are needed to show that the proposed bound is tighter: mathematically, it is strictly tighter.
___
We hope our responses have addressed your questions adequately. If you have any further inquiries, please feel free to ask.
---
Rebuttal 2:
Comment: Thanks for addressing (most of) my concerns. I have increased my score to 6.
---
Rebuttal Comment 2.1:
Title: Thanks for the response.
Comment: Thanks for the response. We have updated the paper based on the comments and questions. | Summary: This paper provides a tighter bounds for robust generalization compared to previous results and as tight as standard generalization.
Strengths: 1. The paper studies an important topic on adversarial robustness and provide a tighter bound with detailed theoretical analysis.
2. The paper is well-written and the conclusions are sound as far as I understand (I didn't check the proof in Appendix).
Weaknesses: * As the paper is an improvement on Farnia et al. (2018), thus the technical novelty is less significant.
* The authors should explain why a tighter upper bound is insightful for the robustness community. In my understanding, the robust generalization bound is usually used to explain which factor affects the robust generalization, or why robust generalization is worse than standard generalization. The authors should discuss more on the implications from the new result.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: There is no discussion on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer nNum for the comments and questions.
___
**Q1:** As the paper is an improvement on Farnia et al. (2018), thus the technical novelty is less significant.
**A:** Thanks for the question. The technical novelty of our research goes beyond mere improvement; the **technical novelty** is the **most important** part of our work.
Let us clarify below.
Firstly, we provided a detailed comparison with Farnia et al. (2018) only because we both employ the PAC-Bayes approach. This comparison aims to shed light on the mathematical difficulties involved in deriving robust bounds. It should not be mistaken as implying that our results are merely incremental improvements on Farnia et al. (2018). The discussion of Farnia et al. (2018) aims to convey two messages:
1) Existing techniques cannot provide a robust bound unless new assumptions are introduced.
2) New mathematical tools are needed to prove a robust generalization bound.
Secondly, as discussed in lines 117-130, related work in this field (no fewer than 7 papers, including Farnia et al.'s PAC-Bayes approach) tried to extend norm-based bounds to robust settings. The main difficulty is mathematical: the supplementary assumptions or additional terms in these works are less than ideal, adjustments made as a compromise due to mathematical issues. An ideal bound, however, should be a clean extension of the generalization bound in the standard setting.
Therefore, a technique providing such an extension is important to the community. The significance of our paper lies in providing the mathematical technique that resolves this problem.
Finally, since this is a common question, we provide a more detailed answer regarding the **significance of our result** in "common question 1". We understand that our comparison to Farnia et al. (2018) may have misled the reviewers into considering our work incremental, and we will make revisions to address this concern.
We believe our detailed answer in "common question 1" could address your concern.
---
**Q2:** The authors should explain why a tighter upper bound is insightful for the robustness community.
**A:** Thanks for the question. Our bound offers **significant new insights** into adversarial robustness, as discussed in Section 5. We understand that the main concern or confusion comes from the fact that we did not emphasize what is new. We will make the necessary revisions to distinguish old and new information.
As this is a common question from reviewers, we discuss the new insight of our result above in “common question 2”.
___
We hope our responses have addressed your questions adequately. If you have any further inquiries, please feel free to ask.
---
Rebuttal Comment 1.1:
Title: Post Rebuttal
Comment: Thanks for the response.
I have read the response. The response looks good but I believe that the authors still need some revision for the paper to make the presentation clearer. I'll keep my rating.
---
Reply to Comment 1.1.1:
Title: Thanks for the response.
Comment: Thanks for the response. We have updated the paper based on the comments and questions. | Rebuttal 1:
Rebuttal: We thank all the Reviewers for the comments and questions. We will first answer three common questions.
**Common Question 1:** Significance of the result. (Reviewer nNum & AB19)
**A:** Our finding is not just an incremental improvement on Farnia et al. (2018); it holds **significant importance** in the field of learning theory, as highlighted by Reviewer FsEL. Let us clarify below.
Short version:
There is an important question that has been open over the past few years: is there a clean/tight extension of the norm-based generalization bound to robust settings?
No fewer than 7 papers (the work discussed in lines 117-130) aimed to solve this problem but faced mathematical challenges. The most important contribution of our paper is to provide the **mathematical technique** (presented in Sec. 6) that resolves this problem.
We understand that our comparison to Farnia et al. (2018) **may have misled the reviewers into considering our work incremental.** This comparison, however, aims to shed light on the mathematical difficulties involved in deriving robust bounds. We will make revisions to address this concern.
___
Long version:
**1) The importance of benchmark norm-based generalization bounds.**
Firstly, norm-based complexity stands as a crucial generalization measure for ML models. This field has seen significant interest, with numerous papers (no fewer than 30) exploring the topic. In 2017, Neyshabur et al. introduced a spectrally-normalized bound for the standard setting using PAC-Bayes approaches. Since then, progress on norm-based bounds for standard training has temporarily stalled, with no tighter bounds being proposed.
(Additionally, a similar spectral-norm bound was provided by Bartlett et al., 2017 using a covering-number approach. These two results hold fundamental significance and have been widely discussed in prominent ML courses, such as Stanford CS229M (Lecture 5) and MIT Statistical Learning Theory (Lecture 16).)
As such, Neyshabur et al.'s (or Bartlett et al.'s) bound serves as an essential 'benchmark' generalization bound, which is why we refer to it as the 'benchmark bound'. A direct question arises.
**2) Fundamental question: Is there a clean/tight extension of norm-based bound to robust settings?**
However, this question has troubled the community for the last few years.
Researchers found it mathematically challenging to derive robust bounds using either the PAC-Bayes approach (Farnia et al., 2018) or covering-number approaches (the work discussed in lines 117-130).
As a result, these works often consider simplified cases or introduce additional assumptions to bypass the difficulty. The resulting robust bounds are much larger than the standard bound, exhibit higher dependence on factors such as depth and width, or include additional terms. It remained uncertain whether further reduction was possible; such attempts did not offer a clean bound for robust generalization.
The significance of our paper lies in providing the mathematical technique that resolves this problem. We demonstrate that the robust generalization bound can achieve the benchmark bound: all the higher dependencies on factors, additional terms, and extra assumptions can be eliminated.
**3) Our paper resolves this question.**
More importantly, our results temporarily close this question: without a tighter (norm-based) bound for standard generalization, no tighter (norm-based) bound for robust generalization can be provided.
___
**Common Question 2:** New insight to the robustness community. (Reviewer nNum & Ccn3)
**A: Our bound offers significant new insights into adversarial robustness**, as discussed in Sec. 5. One main source of confusion stems from the fact that we did not emphasize that the crucial information in Sec. 5 cannot be provided by previous research. We will make the necessary revisions to distinguish old and new information.
Let us explain it below.
It is widely recognized that deep neural networks (DNNs) demonstrate good standard generalization ability but often exhibit poor robust generalization. A key question that needs to be addressed is:
**What factors contribute to such a significant difference?** (Line 204)
Previous studies have shown that the robust bound is much larger than the standard bound. Therefore, the mismatch term between these two bounds might be a contributing factor to the significant difference. For instance, the robust bound might have a higher dependence on width or include an additional term related to gradient information. Such factors (width, gradient) could potentially explain the disparity between standard and robust generalization.
**Current Hypothesis: The significant difference is due to Robust Bound >> Std Bound.**
However, verifying this hypothesis has been challenging because it remains unclear whether the existence of these factors is due to mathematical issues.
Our results provide a definitive answer and dismiss this possibility. In essence,
**Our result: Robust Bound $\approx$ Std Bound.**
We have shown that the additional factors (dependencies, additional terms, assumptions) arise solely from mathematical considerations. Consequently, from a norm-based complexity perspective, they do not contribute to the significant disparity.
Therefore, in Section 5, we focus on attack intensity and spectral complexity.
From the norm-based complexity perspective, we show that only **attack intensity** and **spectral complexity** (and not other factors) contribute to the significant disparity. We believe this is a significant new insight into adversarial robustness.
___
**Common Question 3: Limitation.**
We provide the limitation below, which will be added to Sec. Conclusion.
The main limitation is that norm-based bounds are all large in practice, as shown in Table 1. How to obtain smaller bounds in practical scenarios remains an open problem.
Understanding the detrimental class-level effects of data augmentation | Accept (poster) | Summary: This paper studies data augmentation (DA). Although DA improves overall accuracy, recent studies have pointed out that it can adversely affect individual class accuracy by up to 20% on ImageNet. This happens due to a lack of understanding of how DA impacts class-level learning dynamics. This research offers a framework to understand this interaction. By using high-quality multi-label annotations on ImageNet, it is found that most affected classes are inherently ambiguous, co-occurring, or involve fine-grained distinctions. Although multi-label annotations explain many previously reported performance drops, the analysis reveals other sources of accuracy degradation. It is demonstrated that class-conditional augmentation strategies, informed by this framework, can enhance the performance on classes negatively impacted by DA.
--------Post-rebuttal------------
The rebuttal has fairly addressed my concerns. Accordingly, I have improved my score.
Strengths: This study conducts an analysis of data augmentation bias, a process that bears significant practical implications for real-world applications. The meticulous examination of such bias provides insight into how it affects and shapes models in practical scenarios.
The research specifically investigates the impact of random crop scale on model training. It is revealed that the class-level performance drop experienced during ImageNet training can be mitigated through multi-label annotation. Intriguingly, the primary cause of this drop is found to be the co-occurring, ambiguous, and noisy conditions present within the class labels. This discovery is not only fascinating but potentially inspiring, opening up new avenues for further exploration in this domain.
The study introduces a novel class-conditional data augmentation strategy. This innovative approach shows promise in further alleviating data augmentation bias, thereby enhancing the robustness and performance of models in real-world deployments.
Weaknesses: This work is limited to only one augmentation method (i.e., random crop) and only CNN-based architectures (ResNet and EfficientNet).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It would be beneficial to extend the analysis to encompass a wider range of augmentation methods, such as the hyperparameters in RandAugment and Data MixUp. This could potentially ascertain whether the observations drawn from the study are sufficiently generalized across various data augmentation methods, thus enhancing the validity and universality of the findings.
Are the observations consistent when applied to Vision Transformer (ViT)-based models? Although this study demonstrates the consistency of findings between ResNet and EfficientNet (both are Convolutional Neural Network (CNN) architectures), providing additional analysis with ViT-based models could significantly strengthen the overall solidity of the paper. This extended investigation would provide a more comprehensive understanding of how different model architectures interact with the described phenomenon.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their clear understanding of our submission and for noting the importance of our study! Please see our general response in which we detail our findings on new datasets, architectures and data augmentations (notably including the suggested ViT model). We observe that our insights, methodology, and DA intervention transfer to those different settings. As a brief summary, added experiments include one non-convolutional architecture (ViT, as per your suggestion), and three additional DAs (RandAugment, mixup, and colorjitter), and two new image datasets (CIFAR100 and Flowers102). We hope you can consider our responses in your final evaluation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I believe, with these new experimental results, this paper will become more solid and thorough.
I have another question driven by curiosity. I believe the following analysis could be insightful: could you provide a deeper analysis or make a summary on why label noise and label co-occurring result in performance drops and false positives with strong augmentation? I'm not asking for additional experiments, just curious about the essential reasons.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you for engaging in further discussion, and for your review feedback which helped us expand the scope of our work and made the submission stronger with the new analysis and experimental results!
In our paper we showed that the classes negatively affected by strong data augmentation are often ambiguous, co-occurring or fine-grained categories (measured by co-occurrence frequency in ReaL labels and semantic similarity of categories).
To explain the intuition behind the class-level accuracy drops and the increase in false positive mistakes, we can take the co-occurring classes “academic gown” and “mortarboard” as an example; similar mechanisms apply to the other confusion types as well. Since 60% of the examples with label “mortarboard” also have the ReaL label “academic gown”, random resized crops (RRC) of images labeled “mortarboard” will sometimes focus on the gown, yet the model is still optimized to predict the “mortarboard” label on such training examples. As a result, the model becomes biased toward predicting “mortarboard” on “academic gown” images, leading to degraded “academic gown” accuracy and increased false positive mistakes for “mortarboard”. In general, we observe similar effects whenever strong RRC applied to class $l$ often focuses on features that effectively correspond to another class $k$: this leads to degraded accuracy for class $k$ and increased false positive mistakes for class $l$.
For ambiguous or nearly identical categories, such as “sunglass” and “sunglasses”, the classes might differ slightly due to idiosyncrasies of the labeling pipeline (e.g., images from one class may be more often zoomed in than images from the other), and because of such statistical differences the data augmentation strength controls the bias towards one of the two plausible categories.
Please let us know whether this answers your question; we will be happy to provide further clarifications or answer any other questions during the discussion period! | Summary: The authors explore the role of random resized crop in ImageNet performance. First, they improve on the analysis in prior work and show that class-level performance degradation has been overstated, and that when multi-label annotations are used, one of the labels is often still predicted. Next, by inspection, the authors find that many classes whose accuracies are affected by random resized crop are either completely ambiguous (sunglass vs sunglasses) or co-occurring (suit, tie; car, wheel). Finally, they suggest an intervention which takes false positives into account in addition to false negatives, dramatically improving accuracy on the most-affected classes while preserving overall accuracy.
Strengths: Originality: The idea of using multi-label annotations to judge the severity of the mistakes is sensible.
The idea of using False Positives for conditioning augmentation is an interesting one.
In general, the idea of using labels in the training procedure as a conditioning mechanism outside of the loss is interesting.
Quality: The methodology seems sound and the claims are supported by the evidence.
Clarity: The paper is clearly written and the figures are easy to understand.
Significance: The paper is significant in that it sheds light on a process which is often used as a black box (data augmentation), and moves us towards more bespoke models. It is the case that we often report best top-1 accuracies, averaged over a validation set, and that this obscures the variation in different kinds of mistakes. However, certain kinds of mistakes are much more expensive than other kinds. Then, improving performance for certain classes while retaining performance of others is an important intervention for developing deployable models. This work takes a step towards the understanding needed for making these kinds of trade offs.
Weaknesses: * The analysis is done only on a single model (supervised ResNet-50), on a single dataset (ImageNet). While I expect conclusions to be similar across other datasets/models, it would be interesting and important to confirm this.
* Similarly, the analysis only considers RRC. What about other common augmentations?
* ImageNet pretraining is frequently used for transfer learning. Does the class-conditional augmentation intervention impact transfer performance?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weaknesses.
* Does the intervention have more impact on learned features, or on the final classification layer? In other words, if the backbone was frozen after standard training and the final layer was retrained using the proposed intervention, what would the effect be?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Authors have adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review! We appreciate that you find our work interesting, significant and clearly written.
**Model, dataset, data augmentation**: We agree that confirming the applicability of our insights and methods to other models and datasets is crucial. To that end, we have provided supporting experiments which include a non-convolutional architecture (ViT-S), two datasets (CIFAR100, Flowers102), and three DAs (colorjitter, mixup and RandAugment). While we describe in detail the experiments and findings in the general answer, we mention here that our conclusions hold in those settings, and that the intervention we propose also transfers to those cases. We hope that those findings will provide further evidence for the validity of our findings in the broader context of computer vision.
**Effects on learned features vs final layer**. We explored a similar experiment in Appendix F of our submission: we experimented with fine-tuning the model from the checkpoint trained with the strongest augmentation, using the class-conditional policy with augmentation strength changed for 10 classes as in Section 6. However, we found that such fine-tuning leads to a drop in both average accuracy and the accuracy of the negatively affected classes. We hypothesize that this might be due to the feature extractor learning unwanted invariances or the model memorizing the training data. In future analysis, we will explore whether it is possible to alleviate DA bias if we fine-tune the model from an earlier checkpoint as opposed to a fully trained model, or if we use additional held-out data for fine-tuning.
**Transfer learning**: This is a very interesting suggestion! We prioritized looking at different architectures, datasets and DAs for our rebuttal, but we agree that it is an important avenue that we hope to explore in the future.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: The authors have sufficiently answered my concerns in the rebuttal, and I have decided to keep my rating of 7 (accept). | Summary: The paper presents a meta study on the effects of data augmentation across classes. In particular, the authors work with ResNet50 architectures trained on ImageNet, and show how, for some classes, strong data augmentation drastically decreases the per-class accuracy. The paper focuses on random resized crop augmentations, specifically by varying the lowest possible crop size. The authors discuss these results in light of different types of overlap between classes and show how considering a different ground truth reveals that the models are not always learning something wrong, but may instead learn to focus on object parts rather than whole objects.
Strengths: - Interesting study overall, which extends prior work on the same topic. The paper exposes some effects that might be obvious, but does so in a systematic and well constructed manner.
- The paper is very well written, and well presented. It was easy to follow and to understand main points.
- Supplementary materials contain interesting plots and additional results nicely complementing the paper.
Weaknesses: - Limited to one data augmentation strategy; it would be interesting to see how other potentially aggressive, class-confusing augmentations (e.g. colour changes, contrast enhancements, affine transformations, etc.) affect the classification scores and interact with RRC and with one another. I understand this would make the analysis considerably more complex, but in the end those augmentations are commonly used, and it rarely happens that one relies exclusively on RRC.
- The study is strongly limited by the use of ImageNet only; other datasets also have multilabel annotations. It would be really beneficial to understand the effects on other datasets, where the total number of classes is lower and potentially the relatedness across classes is also reduced.
- Similarly, it would be interesting to understand if other CNN models or ViT would behave the same. I think it would nicely close the circle. I think such study on different models and different datasets would be quite interesting overall.
- In conclusion, although well done and well presented, I think this paper does not make a substantial contribution in this current form, but could potentially do so if the breadth and scope are extended.
- I kept wondering what would happen if I train / fine-tune the classifier in a multi-label setting, instead of just using ReaL as enhanced ground truth. It could be possible to do so using, e.g., COCO?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - L33: as pointed out above, it would be interesting to see the effects on other datasets where classes are less and ideally only a subset of classes interact. Is there any literature on this that can be used here?
- I would try to motivate why only the RRC is used. Be it simplicity, be it as an example of widely used augmentation, but I could not stop thinking in the paper how other DA strategies would behave.
- I wonder if some dataset annotations where object parts annotations are available (eg https://github.com/TACJu/PartImageNet) could not be used to study in more detail the relationship between local crop and object parts
- I missed at the end of Section 2 a summary of main limitations of current SOTA and to what this paper is answering specifically.
- I would try to motivate (section 3 beginning) why only focusing on RRC, ImageNet and ResNet50. I agree on the choices, but is there something more generic we can draw out of this? What if I replace the imageNet with a DenseNet or use ResNet to classify COCO in multilabel settings? I missed some more general guidelines and motivation on the study.
- L215-128 / L248-251: Would it be relevant to check if the model, in its top-K predictions, contains the classes that ReaL would show as most ambiguous? Independently of the object part relationship / co-occurrences / semantic similarity, a biased model should tend to predict those classes with high probabilities, maybe the predictions themselves and the co-occurrence of predictions could somehow show some of these effects.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not discuss limitations of the study, but I think some points mentioned above could be discussed, at least, to strengthen motivations and underline that some of the effects observed could be observed in general.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! Please also see our separate general post, which contains new experiments inspired by your comments and clarification on the setup. Inspired by your feedback we have significantly increased the scope of our paper, and included results for new architectures, datasets and data augmentation types. We hope you can consider our responses in your final evaluation, and please let us know if we can answer any additional questions during the discussion period.
### Limitations of previous studies and motivation and contributions of our work
Although recent studies observed that data augmentation may lead to severe class-level accuracy drops [1, 2], no method has yet addressed this issue due to the limited understanding of the origins of that performance degradation. To our knowledge, our work is the first systematic investigation narrowing down the root cause of class-level accuracy degradation when using DA. As a direct result of this finding, we propose a class-dependent DA intervention that takes into account the trade-off between class-level false negative and false positive mistakes. Unlike previous attempts to mitigate this issue [1], our data augmentation intervention solely changes the DA strength for a few classes, significantly improving the accuracy of the negatively affected classes while retaining strong accuracy on average.
### The choice of the dataset, models and augmentation type, and additional experiments
The focus on ImageNet stemmed from previous studies which reported class-level accuracy drops primarily on that specific dataset [1, 2]. Inspired by your suggestion, we consider additional datasets to extend our insights beyond ImageNet, and include analysis on CIFAR-100 and Flowers102 in our general response.
Similarly, our focus on RRC stems from it being the most impactful DA in terms of final average accuracy increase. Most computer vision models use RRC with [8%, 100%] bounds, sometimes combined with additional DAs. Yet, it is also natural to ask if our conclusions extend beyond RRC which is why we have considered RandAugment, colorjitter, and mixup in our general answer.
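For readers less familiar with RRC, the crop-sampling logic behind it (as implemented in standard libraries such as torchvision's `RandomResizedCrop`) can be sketched in pure Python; the `[8%, 100%]` bounds mentioned above correspond to `scale=(0.08, 1.0)`, and the experiments vary the lower bound. This is an illustrative sketch of the standard technique, not the authors' exact training code.

```python
import math
import random

def sample_rrc_box(height, width, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3)):
    """Sample a crop box as in Random Resized Crop: draw a target area
    fraction uniformly from `scale` and an aspect ratio log-uniformly
    from `ratio`; fall back to a center crop if no valid box is found."""
    area = height * width
    for _ in range(10):
        target_area = random.uniform(*scale) * area
        aspect = math.exp(random.uniform(math.log(ratio[0]), math.log(ratio[1])))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            top = random.randint(0, height - h)
            left = random.randint(0, width - w)
            return top, left, h, w
    # Fallback: center crop of the short side.
    s = min(height, width)
    return (height - s) // 2, (width - s) // 2, s, s
```

Raising the lower bound `scale[0]` (e.g. from 0.08 toward 0.9) makes crops milder, which is the knob varied throughout the experiments.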
Beyond the dataset and DA, the choice of architecture is also crucial. While we explored ResNet and EfficientNets, we acknowledge that those belong to the same convolutional family, and thus, as per your suggestion, we have added ViT-S in our general answer.
In all of these new experiments we found results to be consistent with our observations in the main paper. We hope that the addition of two datasets, one non-convolutional architecture, and three additional DAs help reinforce our methodology and conclusions.
[1] Balestriero et al, The Effects of Regularization and Data Augmentation are Class Dependent, 2022
[2] Bouchacourt et al, Grounding inductive biases in natural images: invariance stems from variations in data, 2021
---
Rebuttal Comment 1.1:
Title: Rebuttal Follow up
Comment: I would like to thank the authors for taking the time and considerable effort to address my questions and limitations. The rebuttal and the additions to the paper are significant and indeed address the points I raised as limitations and weaknesses. The additional model (ViT-S) is very relevant and complements the work well. Adding DA strategies also removes some doubts regarding the generality of the work. The additional analyses are relevant and informative.
I might have been a bit strict in my previous review; I really wanted the scope of the work not to be limited by models and augmentation strategies. I like the separation of the modes of errors into visual ambiguity, class co-occurrence, fine-grained distinctions, and semantically unrelated but likely visually interacting features. I think these aspects are important for a high-level understanding of what those models learn, but also of how to best use them.
After considering the rebuttal, the response to the review(s) and considering again the contribution, I think the research paper has now filled the gaps I felt were limiting the contribution, and I am happy to raise the score to Accept. I look forward to further studies relating how to use multi-label annotations and object parts to train more robust models, or have an idea how we could make data augmentation strategies be consistent with semantics and observers viewpoint (eg. object parts or full object class) to train better models.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you very much for your feedback which helped us expand the scope of our work and for taking into consideration our response in your final score! | Summary: The authors study the effect of data augmentations on the classwise performance under data imbalance. They focus on the rezised cropping operation and distinguish between 4 different failure cases by using multi-label annotations. They also show that it is possible to recover some of them but using an informed class-conditional augmentation.
Strengths: The tackled problem is very relevant since it aims to understand the failure cases introduced by the standard data augmentation pipeline.
The authors provide decent empirical support to their claims and the paper is overall well written.
Weaknesses: * Although the introduced class-conditional augmentation seems to help recover some failure cases, the overall performance of the model is not improving! This unexpected behavior is worth more investigation.
* The last category of failures denoted 'semantically unrelated' remains unexplained
* Minor: in lines 267, 268 the authors claim that Figure 3 shows the confusion between bath towel and pillow, while it actually shows the confusion between muzzle and sandal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Improving the quality of this paper would definitely require addressing the major weaknesses cited above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and constructive feedback! We respond to your questions below, and we will remain available for any further discussion and clarifications throughout the discussion period.
### Average performance improvement
The reviewer raises an interesting point regarding the average performance not improving upon adjusting the per-class DA strength. We emphasize that the classes on which our intervention changes the DA only account for a very small portion of the dataset (1-5%). As such, even in the ideal scenario in which we significantly improve the accuracy on these classes, the impact on the average performance will be quite minimal. For example, a 10% accuracy improvement on 1% of the classes would only result in 0.1% improvement in average accuracy. Consequently, the goal of our intervention is to improve the performance specifically on the classes negatively affected by the default augmentation policy, and not the average performance. In the paper, we show that our intervention improves the performance on the target classes, while not significantly affecting the performance on the remaining classes, as expected.
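The back-of-the-envelope claim above can be checked directly: with equally weighted classes, a 10-point per-class gain on 1% of the classes moves the average by only 0.1 points. A minimal check:

```python
def average_accuracy(per_class):
    """Average of equally weighted per-class accuracies (in percent)."""
    return sum(per_class) / len(per_class)

# 1000 equally weighted classes at 70%; improve 1% of them (10 classes) by 10 points.
accs = [70.0] * 1000
for i in range(10):
    accs[i] += 10.0
print(average_accuracy(accs) - 70.0)  # ≈ 0.1 percentage points
```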
We also note that interventions targeted at improving performance disparities often significantly *hurt* the average performance, which is not the case with our intervention. Moreover, we believe that our observations could inspire future work, which would target improving the average performance, by designing more elaborate augmentation policies.
### Categories of class confusions
One of our key conclusions is that DA (e.g. RRC) reduces the performance of the model on a minority of the classes by introducing confusing (image, label) pairs which could be attributed to different classes. The prototypical example we employed was how the RRC of a car picture might produce a wheel picture. As such, car and wheel can be considered ambiguous, or related, under the RRC DA. Therefore, by monitoring the True Positive, False Positive, False Negative and True Negative rates of those two classes, a relation can be drawn (this is our main observation, which also enables our per-class DA intervention).
For the **semantically unrelated** confusion category, the classes often share similar visual features. For example, the classes muzzle and sandal both show a net-like structure. Similarly, the classes flute and rifle share a long tube shape. Depending on the augmentation strength, the model assigns objects with these features to one of the two conflicting classes.
**Line 267, 268**: We thank the reviewer for their careful reading and for noticing that mistake, this will be corrected in our final submission.
We hope the above answers have alleviated your concerns, but if you have any further questions we are happy to discuss at any point throughout the discussion period.
---
Rebuttal Comment 1.1:
Title: Official comment by the area chair
Comment: Dear Reviewer,
The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback! We are happy that the reviewers found our paper “systematic and well constructed” (nQVM), noted that it “bears significant practical implications” (wB8d) and that it “sheds light on a process which is often used as a black box” (EjrR). Data augmentation (DA) is essential in deep learning, yet it is poorly understood. Recent studies observed that DA may lead to severe class-level accuracy drops [1, 2]. To our knowledge, our submission is the first systematic investigation of these degradations, narrowing down their cause and partially alleviating them. In particular:
(1) We correct prior work analysis on ImageNet using multi-label annotations: many previously reported cases of class-level performance degradation are explained by label ambiguity;
(2) We systematically categorize class confusions exacerbated by DA, using multi-label annotations and class similarities, revealing that the majority of these confusions concern ambiguous, spuriously correlated or fine-grained categories;
(3) We propose a class-dependent DA intervention inspired by our findings. Unlike previous attempts [1], our intervention only changes the DA strength for a few classes, and significantly improves the accuracy on the affected classes.
We believe that our work makes significant advances towards reliably deploying DA in the real world, where we must understand and remedy its detrimental effects to avoid costly mistakes. We have added multiple experiments inspired by reviewer feedback, and hope these results can be considered in the final assessment.
## Additional Experiments
The main feedback shared across reviewers was on the generality of our results beyond ImageNet, ResNet-50, and the Random Resized Crop (RRC). We have conducted several novel experiments during the rebuttal and significantly extended the scope of our work, as we describe below. We will add these results to our final submission.
### Vision Transformer
We train a Vision Transformer model ViT-S [3, 4] varying the RRC scale lower bound $s$ in the set {10%, 20%, …, 90%} and report the results in Figure 1 of the pdf. Generally, we confirm that our observations hold for ViT.
While the optimal average accuracy is obtained with the strongest augmentation, for several classes accuracy significantly degrades. Evaluation with multi-label annotations reveals that some of the confusions are due to inherent label ambiguity or class overlap (Figure 1, left). We also identify the same high-level class confusion categories: ambiguous, co-occurring, fine-grained and semantically unrelated (Figure 1, right).
By conducting a data augmentation intervention from Section 6 of the paper and changing the RRC augmentation strength for just 10 classes, we improve the accuracy on the degraded classes by over 3% (from 52.28 $\pm$ 0.18% to 55.49 $\pm$ 0.07%).
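The class-conditional intervention described above (changing RRC strength for a handful of classes) can be sketched as a per-label lookup. All names and values below are illustrative assumptions, not the authors' actual configuration or class ids:

```python
# Illustrative class-conditional RRC policy: most classes keep the standard
# aggressive lower bound; a few negatively affected classes get milder crops.
DEFAULT_SCALE = (0.08, 1.0)    # standard ImageNet RRC bounds
MILD_SCALE = (0.6, 1.0)        # illustrative milder setting

AFFECTED_CLASSES = {403, 667}  # hypothetical label ids of degraded classes

def rrc_scale_for(label):
    """Return the RRC `scale` range to use when augmenting an example
    with the given training label."""
    return MILD_SCALE if label in AFFECTED_CLASSES else DEFAULT_SCALE
```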
### Additional data augmentation types
**RandAugment**: RandAugment [6] randomly applies color perturbations, translations and affine transformations. We train ViT-S with RRC $s=10\%$ and vary the RandAugment magnitude $m$ in the range {1, 3, …, 9} ($m=9$ is standard for ViT [3, 4]; values above $m=9$ lead to significant degradation). We report results in Figure 2, left. While the RandAugment strength has a smaller effect on accuracy than RRC, we still observe an increase of around 0.5% in average performance with $m=9$. However, that comes at the cost of about a $4\%$ accuracy drop for a minority of classes. In Figure 2 we show examples of class confusions exacerbated by RandAugment, which we can categorize analogously to Section 5 of our paper.
**colorjitter**: We train a ResNet-50 using the strongest RRC augmentation, with colorjitter applied with probability 0.5 and intensity $c=0.1$ for all parameters (brightness, contrast, saturation and hue). Applying colorjitter with higher probability or intensity leads to degraded average accuracy, while colorjitter with $c=0.1$ leads to a slight improvement of $0.1\%$ in average accuracy. In Figure 2, right, we show the distribution of accuracy improvements and degradations due to colorjitter, as well as examples of the class confusions that were exacerbated.
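The "applied with probability 0.5 at intensity $c$" protocol above can be illustrated with a minimal single-channel brightness jitter; real colorjitter also perturbs contrast, saturation and hue, so this simplified function is an assumption for illustration only:

```python
import random

def maybe_brightness_jitter(pixel, c=0.1, p=0.5):
    """With probability p, scale a [0, 1] pixel value by a factor drawn
    uniformly from [1 - c, 1 + c], clamping back into [0, 1]."""
    if random.random() < p:
        factor = random.uniform(1.0 - c, 1.0 + c)
        return min(max(pixel * factor, 0.0), 1.0)
    return pixel
```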
### New datasets and augmentation
**CIFAR-100 + mixup**: We study the mixup [7] augmentation for ResNet18 on CIFAR-100. We train for 100 epochs using Random Crop; mixup with $\alpha=0.5$ improves the average accuracy from $78.11 \pm 0.15\%$ to $78.53 \pm 0.35\%$. However, we observe degradations for some per-class accuracies (see Figure 3). The exacerbated confusions are mainly within the same superclass categories of CIFAR-100, which is aligned with our prior results on ImageNet, where we observed that fine-grained confusions are the most significantly affected by augmentation.
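For readers unfamiliar with mixup, the augmentation can be sketched as follows: two examples are convexly combined with a weight $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$, and the targets become soft labels. This is a generic sketch of the standard technique, not the authors' training code:

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.5, num_classes=100):
    """Mix two examples: inputs are convexly combined with weight
    lam ~ Beta(alpha, alpha); targets become a soft label vector."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    y = [0.0] * num_classes
    y[y1] += lam
    y[y2] += 1.0 - lam
    return x, y
```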
**Flowers102**: We study the effect of applying standard RRC in the Flowers102 classification task; while using augmentation improves the average accuracy by $2\%$, we observe that some classes are negatively affected (see Figure 3).
## Motivation of the setup
While we significantly increased the scope of our analysis in the rebuttal, we would like to clarify the motivation of the setup in our submission. We focus on ImageNet as it is a widely used large-scale benchmark, and the class-level accuracy drops were previously noticed specifically on ImageNet [1, 2]. We chose to study Random Resized Crop as it is used in training most modern state-of-the-art computer vision models [e.g. 3, 4, 5], and it leads to the most significant improvements in terms of average accuracy, compared to more modest accuracy gains achieved with colorjitter, mixup, or RandAugment.
### References
[1] https://arxiv.org/abs/2204.03632
[2] https://arxiv.org/abs/2106.05121
[3] https://arxiv.org/abs/2106.10270
[4] https://arxiv.org/abs/2012.12877
[5] https://arxiv.org/abs/2201.03545
[6] https://arxiv.org/abs/1909.13719
[7] https://arxiv.org/abs/1710.09412
Pdf: /pdf/7ebac4537d8045907ab43bff755448625586c652.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Complexity of Derivative-Free Policy Optimization for Structured $\mathcal{H}_\infty$ Control | Accept (poster) | Summary: This paper considers solving the $H_\infty$ control problem using zero-th-order policy optimization. The main results are sample complexity bounds for both the exact Oracle setting and the model-free setting. Numerical simulations are conducted to demonstrate the effectiveness of the algorithm.
Strengths: This paper is well-written. The $H_\infty$ control problem is known to be challenging and the policy optimization results are likely to be of great interest to the learning-in-control community. The sample complexity result, while not necessarily optimal, is the first non-asymptotic result in the literature.
Weaknesses: (1) Related Work: The related work section could be more structured. I went through [32] to complete the review of this work. It seems that the main technical tools are already developed in [32]. However, [32] does not provide any sample complexity result. What are the major technical challenges (and the ideas used to overcome them) in going beyond the global convergence in [32] to the sample complexity results in this work?
(2) Theorem 3.7 and Theorem 4.2: Usually for high probability bounds, as the tolerance level $v$ decreases, more iterations are needed. However, for Theorem 3.7 and Theorem 4.2, as $v$ decreases, $T$ also decreases, which seems counter-intuitive. Moreover, the bound has a polynomial tail rather than an exponential tail. Is this an artifact of the proof or are exponential tail bounds not achievable?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: From the numerical simulations, is it possible to tell what the convergence rate of the algorithm is?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The sample complexity bounds are not necessarily optimal, which was pointed out by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking time and effort to review our manuscript. We sincerely appreciate all your valuable comments and suggestions. Please see our responses below.
**Related Work: The related work section could be more structured. What are the major technical challenges (and the ideas used to overcome them) in going beyond the global convergence in [32]?**
We thank the reviewer for the valuable comment. For the $H_\infty$ control problem discussed in [32], the policy update is performed using the minimum-norm element of the associated Goldstein’s subdifferential. The main idea is based on the fact that such a minimum-norm element is a descent direction of the original cost function $J(K)$. In our work, by contrast, we aim to obtain the sample complexity of zeroth-order methods for solving the structured $H_\infty$ control problem. To this end, the zeroth-order oracle is used to construct an estimator of $\nabla J_\delta (K)$, which is a descent direction of the smoothed function $J_\delta (K)$. We obtain our sample complexity results based on the fact that an $\epsilon$-stationary point of $J_\delta (K)$ is also a $(\delta, \epsilon)$-stationary point of the original nonsmooth cost function $J(K)$. Indeed, there are several technical challenges in developing the sample complexity via this approach:
1. **Feasibility of the generated controllers.** In our control setup, unlike the unconstrained optimization problems, we need to ensure that the iterate $K^t$ and the perturbed iterate $K^t \pm \delta W^t$ stay within a non-convex feasible set (namely, the set of stabilizing policies). Previous work on policy optimization theory of $H_\infty$ control addresses this feasibility issue via using the coerciveness of $J(K)$ and mainly relies on the fact that $J(K)$ is a barrier function on the non-convex set of stabilizing policies. Such previous results rely on model-based algorithms (such as Goldstein's subgradient method) which can decrease the value of $J(K)$ directly. In our paper, we consider the model-free setting and hence need to use zeroth-order policy optimization. However, the zeroth-order policy search can only decrease the value of the smoothed function $J_\delta(K)$, which is not coercive over the non-convex feasible set and hence cannot be used as a barrier function. Importantly, the descent of $J_\delta(K)$ does not imply the descent of the original function value and hence does not ensure feasibility by itself. As a matter of fact, how to choose $\delta$ to ensure that $J_\delta$ is well defined is already non-trivial. Consequently, we need to design the smooth radius $\delta$ carefully (as indicated in Theorems 3.6, 3.7, and 4.2) to ensure the feasibility of the iterates.
2. **Inexact Oracle.** For Theorem 4.2, we consider the inexact oracle case which is particularly relevant for the model-free control setting. Specifically, we are using imperfect estimates of $J(K)$ that are calculated using the model-free MIMO power iteration method. Therefore, an extra statistical error term appears in the iterations of zeroth-order policy optimization, and requires special treatment. Such an extra term has not been considered in the literature of zeroth-order optimization for nonconvex nonsmooth problems. To address this extra technical difficulty, we first establish sample complexity bounds for $H_\infty$ norm estimation of the general MIMO system. Then we carefully propagate such sample complexity bounds to obtain an error bound for $\nabla J_\delta$ in terms of $\epsilon$ and $\delta$.
We will include a more structured related work section and highlight our contributions in the revised manuscript. We emphasize that our study is more relevant to the model-free learning-based control setting, since our sample complexity results address the model-free control directly.
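To make the zeroth-order update above concrete, here is a minimal sketch of a two-point estimator of $\nabla J_\delta(K)$ with a feasibility check. This is an illustration only, not the exact algorithm from the paper: the cost function `J`, the convention that infeasible (destabilizing) policies return `inf`, and the simple rejection rule are our own assumptions.

```python
import numpy as np

def zeroth_order_step(J, K, delta, eta, rng):
    """One two-point zeroth-order update on the smoothed cost J_delta.

    J     : callable; cost of a policy (e.g. the closed-loop H-infinity
            norm), returning np.inf for destabilizing gains (assumption)
    K     : current policy gain (2-D array)
    delta : smoothing radius, eta : step size
    """
    W = rng.standard_normal(K.shape)
    W /= np.linalg.norm(W)                    # direction on the unit sphere
    Jp, Jm = J(K + delta * W), J(K - delta * W)
    # feasibility check: both perturbed policies must be stabilizing
    if not (np.isfinite(Jp) and np.isfinite(Jm)):
        return K                              # reject; shrink delta in practice
    g = K.size * (Jp - Jm) / (2 * delta) * W  # estimator of grad J_delta(K)
    return K - eta * g
```

On a smooth surrogate such as a quadratic cost, a single step with a small step size decreases the cost, illustrating why careful choices of $\delta$ and the step size matter for the feasibility argument above.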
**Theorem 3.7 and Theorem 4.2: Probability bounds seem counter-intuitive. Moreover, the bound has a polynomial tail rather than an exponential tail. Is this an artifact of the proof or are exponential tail bounds not achievable?**
We thank the reviewer for such an insightful comment. We agree with the reviewer that, in general, more iterations are needed to decrease the tolerance level. Our results are also intuitive. Specifically, in both Theorems 3.7 and 4.2, Statement 1 suggests that as $T$ increases, the probability that all the generated controllers are stabilizing will decrease. This is because our algorithm uses a zeroth-order oracle to build an estimator of the smoothed function's gradient. As $T$ increases, the bias and variance of the gradient estimation accumulate, resulting in a larger failure probability. In addition, Statement 2 suggests that as $T$ increases, the probability of finding a $(\delta, \epsilon)$-stationary point will first increase and then decrease. This is also intuitive: when $T$ is too small, more iterations will improve the performance of the generated controllers, but for large $T$, the probability of generating unstable controllers becomes dominant. We will add the above discussions in the revised manuscript.
The reviewer is right about the polynomial tail of our bounds. This is an artifact of the proof techniques, and our polynomial bounds match the existing ones for unconstrained optimization.
**Convergence rate from the numerical simulations.**
From the left plot of Figure 1, the convergence rate in terms of iteration vs. error looks sublinear. This is consistent with our complexity theory which can be viewed as a "sublinear rate" result.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of the Rebuttal
Comment: I thank the authors for their detailed response. I do not have further questions. | Summary: This paper focuses on the structured $H_\infty$ control problem. The authors provide sample complexity bounds for policy optimization in the $H_\infty$ control problem.
The results are provided for two separate scenarios namely:
- Exact Oracle Setting (exact $J(K)$ for any $K$ is available, for the given closed loop system)
- Inexact Oracle Setting (the system matrices are not known)
The theoretical results provide the sample complexity of $H_\infty$ norm estimation.
Finally, the paper provides a few numerical experiments supporting their theoretical results, along with comparisons to some model-based approaches.
Strengths: - The $H_\infty$ control problem is one of the important settings in linear systems, which is well studied in the adaptive control literature but has received less attention in the recent learning theory literature, unlike the standard LQR setting. The paper highlights the various challenges involved in the analysis of $H_\infty$ control due to non-convexity and non-smoothness.
- The sample complexity of $H_\infty$ norm estimation is provided by exploiting randomized smoothing techniques.
Weaknesses: - The algorithm relies on access to an oracle. More discussion on the oracle is warranted.
- The simulation section in the main body (as well as the appendix) of the paper is meager. The presentation quality of plots included can be improved.
- The authors mention in the appendix (line 806) that, when necessary, one can reinitialize to avoid bad local minima, but how would one know that they are at a 'bad' local minimum?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Following are my concerns/ questions regarding the paper.
- Can the authors explain why they call this approach "derivative-free", since calculating $g_t$ is required in the algorithm?
- I would like to know the motivation behind considering the two separate types of oracle scenarios.
- The authors mention that their probability bounds can be sharpened (line 248); I request the authors to present the best possible results in the main body of the paper.
I also request the authors to address the comments in the weaknesses section if possible.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The results rely on access to an oracle.
Social Impact: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking time and effort to review our manuscript. We sincerely appreciate all your valuable comments. Please see our responses below.
**More discussion on the oracle is warranted.**
We appreciate the valuable suggestion from the reviewer. Our paper considers two zeroth-order oracles: the exact oracle is standard for zeroth-order optimization literature and natural for the model-based control setting, while the inexact oracle is relevant for the model-free learning-based control setting. The exact oracle assumes that we can exactly calculate $J(K)$ (which is the closed-loop $H_\infty$ norm) for every stabilizing $K$. When the system dynamics are known, such an oracle is available since the closed-loop $H_\infty$ norm can be efficiently calculated using existing robust control packages in MATLAB (currently, the state-of-the-art techniques for model-based $H_\infty$ norm calculations rely on using the relation between the singular values of the transfer function matrix and the eigenvalues of a related Hamiltonian matrix [BBK1989,BS1990]).
[BBK1989] Boyd, Balakrishnan, and Kabamba, 1989. A bisection method for computing the $H_\infty$ norm of a transfer matrix and related problems. Mathematics of Control, Signals and Systems.
[BS1990] Bruinsma and Steinbuch, 1990. A fast algorithm to compute the $H_\infty$-norm of a transfer function matrix. Systems & Control Letters.
However, in the model-free learning-based $H_\infty$ control setting, the system dynamics are unknown, and $J(K)$ (the closed-loop $H_\infty$ norm) can only be estimated from the input/output data of a black-box simulator of the underlying system. The inexact oracle is natural for such a model-free setting and can be provided by model-free $H_\infty$ norm estimations methods (see the beginning of Section 4.1 for a review). Our paper uses the model-free time-reversal power-iteration-based $H_\infty$ estimation from [WSH2010] to serve as the inexact oracle for $J(K)$. Despite the existence of such algorithms, the prior literature lacks sample complexity bounds for general MIMO systems. Therefore, we first present the first sample complexity result for $H_\infty$ norm estimation for general MIMO systems in Theorem 4.1. Building upon this, we obtain the first sample complexity results for model-free policy optimization of $H_\infty$ control with noisy function values. We will add more discussion on this in the revised manuscript.
[WSH2010] Wahlberg, Syberg, and Hjalmarsson, 2010. Non-parametric methods for $\ell_2$-gain estimation using iterative experiments. Automatica.
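As a rough illustration of the power-iteration idea behind such inexact oracles, the sketch below runs power iteration on a linear input-output map using only forward and adjoint evaluations. This is a finite-dimensional analogue we add for illustration; it is not the time-reversal experiment protocol of [WSH2010], and the function names and defaults are our own assumptions.

```python
import numpy as np

def l2_gain_estimate(forward, adjoint, dim, iters=100, seed=0):
    """Estimate the l2-gain (largest singular value) of a linear
    input-output map by power iteration on the composed map M^T M,
    using only forward and adjoint evaluations.

    forward(u) : output of the system for input signal u
    adjoint(y) : adjoint response (played by the time-reversed
                 experiment in model-free H-infinity estimation)
    """
    u = np.random.default_rng(seed).standard_normal(dim)
    u /= np.linalg.norm(u)
    for _ in range(iters):
        y = forward(u)            # one "forward experiment"
        u = adjoint(y)            # one "adjoint experiment"
        u /= np.linalg.norm(u)    # renormalize the candidate input
    return np.linalg.norm(forward(u))
```

For a diagonal map, the estimate converges to the largest singular value at a rate set by the spectral gap, which is the kind of convergence that the sample complexity analysis of Theorem 4.1 has to quantify for general MIMO systems with finite data.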
**The simulation section in the main body (as well as the appendix) of the paper is meager. The presentation quality of plots included can be improved.**
We thank the reviewer for the constructive comments. We agree with the reviewer that the presentation quality of the plots should be improved. We have revised the plots via proper scaling and adding more explanations. Please see our uploaded one-page pdf file. If the reviewer has more concrete suggestions, please let us know. Any comments are highly appreciated and we will revise accordingly.
**How would one know that they are at a 'bad' local minimum.**
That statement is misleading and we will revise it. In practice, one can use multiple initial points, run the algorithm multiple times, and then report the best case with the lowest $J(K)$. Such a strategy is used in existing packages such as HIFOO and Hinfstruct, so one does not explicitly check whether the solution is a bad local minimum. In general, there are no polynomial-time guarantees for finding the global solutions of such nonconvex problems.
**Can authors explain why they call this approach as "derivative free" approach since calculating $g_t$ is required in the algorithm?**
In Algorithm 1, we use the function value $J(K)$ to build an estimate of $\nabla J_\delta (K)$. The function value is typically referred to as a "zeroth-order oracle" in the zeroth-order optimization literature, and methods that use a zeroth-order oracle are referred to as derivative-free methods or zeroth-order methods [GL2013, CSV2009].
[GL2013] Ghadimi and Lan, 2013. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization.
[CSV2009] Conn, Scheinberg, and Vicente, 2009. Introduction to derivative-free optimization. SIAM.
**Motivation behind considering the two separate types of oracle scenarios.**
We first study the exact oracle case, which is a standard oracle assumption in zeroth-order optimization literature. This oracle can be obtained when the system matrices are known (model-based oracle). Building upon the complexity results under this assumption, we further extend our analysis to a more practical context: the inexact oracle scenario, where we only have access to a black-box system simulator for generating imperfect estimates of $J(K)$. This aligns more closely with learning-based control where system models are unknown (model-free oracle).
**The best possible results in the main body of the paper.**
We are sorry for the confusion here. We mean that the constant factors of the probability bounds in Theorems 3.7 and 4.2 can be improved by, e.g., increasing the level of $\mathcal{S}^1$, using smaller step sizes, or using a smaller smoothing radius $\delta$. For example, in the proof of Theorem 3.7, if we choose a larger sublevel set $\mathcal{S}^1$, then we can obtain a refined constant term in the probability bounds. However, we want to emphasize that refining the constant factors will not change the order dependence on $\epsilon$ and $\delta$ in the sample complexity results (Eq. 12, Eq. 21), which is the most crucial aspect of the sample complexity theory. The order of our current sample complexity results is already the best we can find. We will clarify this point in the revision.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: I thank the authors for their responses and incorporating the modification regarding the presentation of simulations. After reading their rebuttal, I will retain my recommendation regarding this paper. | Summary: This paper studies the static output feedback $\mathcal{H}_{\infty}$ control problem. It proposes a derivative-free policy optimization algorithm via randomized smoothing and further provides sample complexity analysis for the cases with exact and inexact zeroth-order oracles. To validate the performance of the new algorithm, the authors also conduct some numerical experiments and compete against the model-based methods in the literature.
Strengths: 1. As the authors claim, the proposed algorithm is the first derivative-free policy optimization algorithm for constrained structured $\mathcal{H}_{\infty}$ control problem (there have been some works on the unconstrained setting).
2. The authors also consider the inexact oracle setting.
3. The paper provides both theoretical analysis and numerical experiments.
Weaknesses: I am not very familiar with the problem studied by this paper and it looks fine to me. However, I feel like the paper is mainly a combination of existing techniques (such as randomized smoothing and gradient sampling with zeroth order feedback). Would the authors highlight any novel techniques they apply in the analysis or explain what makes the problem different from other nonsmooth nonconvex problems such that this paper is not simply A+B?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in evaluating our manuscript. We value your insightful comments and suggestions. Below are our responses.
**Would the authors highlight any novel techniques they apply in the analysis or explain what makes the problem different from other nonsmooth nonconvex problems such that this paper is not simply A+B?**
We thank the reviewer for the valuable comment. We would like to highlight the novelty of our main results, presented in Theorems 3.6, 3.7, and 4.2, in two respects:
1. **Feasibility of the generated controllers.** In our control setup, unlike the unconstrained optimization problems, we need to ensure that the iterate $K^t$ and the perturbed iterate $K^t \pm \delta W^t$ stay within a non-convex feasible set (namely, the set of stabilizing policies). Previous work on policy optimization theory of $H_\infty$ control addresses this feasibility issue via using the coerciveness of $J(K)$ and mainly relies on the fact that $J(K)$ is a barrier function on the non-convex set of stabilizing policies. Such previous results rely on model-based algorithms (such as Goldstein's subgradient method) which can decrease the value of $J(K)$ directly. In our paper, we consider the model-free setting and hence need to use zeroth-order policy optimization. However, the zeroth-order policy search can only decrease the value of the smoothed function $J_\delta(K)$, which is not coercive over the non-convex feasible set and hence cannot be used as a barrier function. Importantly, the descent of $J_\delta(K)$ does not imply the descent of the original function value and hence does not ensure feasibility by itself. As a matter of fact, how to choose $\delta$ to ensure that $J_\delta$ is well defined is already non-trivial. Consequently, we need to design the smooth radius $\delta$ carefully (as indicated in Theorems 3.6, 3.7, and 4.2) to ensure the feasibility of the iterates.
2. **Inexact Oracle.** For Theorem 4.2, we consider the inexact oracle case which is particularly relevant for the model-free control setting. Specifically, we are using imperfect estimates of $J(K)$ that are calculated using the model-free MIMO power iteration method. Therefore, an extra statistical error term appears in the iterations of zeroth-order policy optimization, and requires special treatment. Such an extra term has not been considered in the literature of zeroth-order optimization for nonconvex nonsmooth problems. To address this extra technical difficulty, we first establish sample complexity bounds for $H_\infty$ norm estimation of the general MIMO system. Then we carefully propagate such sample complexity bounds to obtain an error bound for $\nabla J_\delta$ in terms of $\epsilon$ and $\delta$.
Building upon this, we demonstrate that Algorithm 1 remains effective even with an inexact oracle, ensuring the feasibility of the iterates while achieving finite-time sample complexity with high probability. The inexact zeroth-order oracle has not been considered in any previous papers.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I do not have further questions. | null | null | Rebuttal 1:
Rebuttal: We deeply appreciate the insightful feedback provided by the reviewers. In response to the comments from Reviewer vRAL, we have attached a PDF file containing the updated plots from the main paper. Each comment from the reviewers has been addressed below. We hope our explanations have resolved the reviewers' concerns. Please let us know if you have any additional questions or require further clarification, all comments are highly appreciated.
Pdf: /pdf/1aca1703cea161a60088ef956f8d980a2f61fc44.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Star-Shaped Denoising Diffusion Probabilistic Models | Accept (poster) | Summary: This paper introduces the so-called Star-Shaped DDPM (SS-DDPM). Instead of using a Gaussian forward diffusion step in a Markovian manner, each step of the diffusion process is directly conditioned on the data, in order to construct the "true" reverse process (posterior), resulting in the star-shaped diffusion process. This opens the door to using other transition processes, as demonstrated in the paper.
Strengths: 1. This is one of the most solid papers I have reviewed so far. All the conclusions are backed by the theory.
2. The paper is well presented, and it is easy to follow the main idea, with full materials in the supplementary.
3. The idea presented in the paper is novel and may have wide applications.
Weaknesses: Frankly speaking, I would like to see this paper accepted. I cannot identify any weaknesses. The only point I may raise is that it would be much better for the authors to share their implementation source code.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: When using other transition distributions in the forward process, what is the strategy to actually diffuse the data (into noises)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Not much in this perspective. No comments
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the kind comments!
We will release the source code with the camera ready version of the paper.
Regarding your question:
In general, we follow the same intuition as when designing the forward process in DDPM. We start with low noise ($x_1$, $x_2$, ... should be reasonably close to $x_0$), and end with high noise ($x_T$, $x_{T-1}$, … should be almost pure noise, almost independent of $x_0$). We then need to choose the probabilities such that the inputs to the denoising neural network (in DDPM it is $x_t$, and in our case it is the sufficient tail statistic $G_t$) gradually become more and more noisy. See Figure 5 for reference. To choose the transition probabilities somewhat automatically, we can refer to the rate at which the mutual information $I(x_0; G_t)$ changes with $t$; see Appendix 7: Choosing the noise schedule for details.
---
Rebuttal Comment 1.1:
Comment: Thanks | Summary: The paper introduces a new probabilistic structure for denoising models that does away with a Markov forward process. The authors derive the reverse process in terms of a sufficient statistic that allows for efficient reverse sampling. The form of the model is derived for a variety of noising distributions and the model is evaluated on a range of different data modalities.
Strengths: This work can potentially help many people in the generative modelling community because there has been a lot of work trying to extend corruption process to different data types e.g. discrete, manifold data and this paper has a general recipe that appears to work for a large chunk of these and more which could be really helpful when encountering non-standard data types. The example of the Wishart distribution on positive definite matrices is very cool.
I can foresee many extensions to this work, looking to bring more distributions into the framework, extending it to continuous time which could bring up a lot of interesting technical problems and broadening the class of possible distributions by introducing approximate sufficient statistics.
The paper has a good suite of experiments on a variety of real world practically relevant data such as text and images and the performance of the model seems good
Weaknesses: I think the paper could benefit from more intuitions and visualizations for the role of G. Initially, the model seems strange, as x_t can jump a lot between steps t and t+1 due to going back to x_0 and resampling q(x_t | x_0). However, it seems that G instead is the smoothly changing statistic that is more analogous to the state in previous diffusion models. For example, I think Figure 9 in the supplementary is quite interesting and could be moved to the main text, as it shows that a correctly normalized G looks very similar to the state in normal diffusion models. It would be good to have more intuition on the role of G and how it is more closely linked to the 'x_t' of normal diffusion models.
Further, there is not really an experiment that motivates the use of these more interesting distributions in a scenario where normal diffusion models really don't work at all. I appreciate that this is somewhat of an open question in the literature more generally, and the community has already proven an interest in adapting the corruption process to the data modality of interest, so this may not be a big requirement.
Edit after rebuttal: I have read the authors rebuttal and appreciate the answers to my questions about the derivations and the FID scores. I intend to keep my score as it is.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In the proof of Theorem 1, how do you go from line (23) to line (24), as in, how do you know what q(G_t | x_0) is, for the numerator in (24)?
Figure 6 is confusing to me because it makes it seem like the model is better than DDPM at all NFE, but in the text the best FID is stated to be 3.17, which is the same as the best FID from DDPM; so shouldn't Figure 6 show the method and DDPM at least overlapping at 10^3 sampling steps?
In DDPM, the Gaussian form of the reverse distribution is justified by considering the limit as the noise in each step is taken to 0 and the number of steps is taken to infinity. Is there any such justification for the Gaussian form that you assume in your model or is this a heuristic?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations of the framework e.g. restriction of the distributions and ability to find the sufficient statistics was nicely discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments! We would like to address your questions below.
**Weakness 1:**
We used Figure 5 to illustrate this point in the main text; however, we should probably make this point clearer in the text. The close connection between $G_t$ in SS-DDPM and $x_t$ in DDPM is crucial to understanding the model. We will try to expand Figure 5, give more attention to Figure 9, or rewrite the text to make this clearer.
**Question 1:**
The first transition ((23) to (24)) is just using the normalization constraint. Since we have started with a distribution on $x_0$, we can drop all multiplicative terms that do not depend on $x_0$ and then renormalize. We have started with a conditional distribution $q(x_0 | x_{t:T})$, and were able to rewrite the density as a function of $x_0$ and $G_t$. This immediately means that the resulting distribution is a conditional distribution $q(x_0 | G_t)$, hence the second transition on line (24). Because of that we do not need to compute these distributions explicitly. We have unintentionally skipped over this step as it has previously seemed to be obviously true. However, we were not able to recall or find an existing lemma that establishes this fact (although surely it must exist somewhere), so we will include a more detailed proof. Thanks for pointing it out! Something along these lines should work:
**Lemma 1.** Assume $x$ and $y$ follow a joint distribution $p(x, y) = p(x)p(y | x)$, where $p(y | x)$ is defined as $f(y, h(x))$. We would like to show that $p(y | x)=p(y | z)|_{z=h(x)}$, where $z=h(x)$.
We can write down the joint distribution $p(x, y, z) = p(x)f(y, h(x))\delta(z - h(x))$. Then $p(y, z)=\int p(x, y, z) dx=$ $\int p(x)f(y, h(x))\delta(z - h(x)) dx =$ $\int p(x)f(y, z)\delta(z - h(x)) dx =$ $f(y, z)\int p(x)\delta(z - h(x)) dx=$ $f(y, z)p(z)$.
Finally, the conditional probability is $p(y | z)=\frac{p(y, z)}{p(z)}=f(y, z)$ — the same function.
In the case of line (24), $x_0$ is $y$, $x_{t:T}$ is $x$ and $G_t$ is $z$.
**Question 2:**
Unfortunately, the original DDPM paper did not include the results on shortened generation (FID vs NFE). Therefore, we used the results reported by Nichol&Dhariwal 2021 (Improved DDPM), where the final performance of DDPM was slightly worse. This discrepancy is important, which is why we only conclude that SS-DDPM achieves a comparable performance rather than strictly outperforms DDPM.
**Question 3:**
In our model we use the same distribution for the reverse process as we do for the forward process (e.g. Beta distribution for Beta SS-DDPM, etc — we define it in eq. 20). This is a natural choice because different distributions from the exponential family have different supports, and choosing other distributions would likely make the involved KL divergences infinite. Also, when these distributions are kept the same, it is usually possible to compute the KL divergence analytically and simplify it (see Table 3 in the end of the Appendix). However, you are right that this is just an approximation, and in general other distributions could make sense too. For example, mixtures could be a good candidate to consider: the “true” reverse distribution in eq. (19) is essentially an infinite mixture of these distributions. The same argument could be made for Gaussian discrete-time DDPMs too.
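As an illustration of the analytic KL terms mentioned above (along the lines of Table 3), here is the standard closed form for the KL divergence between two Beta distributions. This is our own sketch for illustration, not code from the paper, and the specific parameter values are arbitrary.

```python
import numpy as np
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL( Beta(a1, b1) || Beta(a2, b2) ), standard closed form.

    This is the kind of analytic KL term that becomes available when
    the forward and reverse processes share the same exponential
    family, as in Beta SS-DDPM."""
    return (betaln(a2, b2) - betaln(a1, b1)       # log-partition terms
            + (a1 - a2) * digamma(a1)             # E[log x] term
            + (b1 - b2) * digamma(b1)             # E[log(1-x)] term
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))
```

The divergence is zero when the two parameter pairs coincide and strictly positive otherwise, which is the sanity check one would run before plugging such a term into a training loss.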
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I understand now how to get to line (24); I think just the trick of renormalizing wrt x_0 is what I was missing. For the FID results, my worry was that the DDPM results might have been calculated with 10K samples whereas others used 50K, and when you use fewer samples in FID it biases the result upwards. But it seems that in Nichol&Dhariwal 2021 they were using 50K for this plot, so I think the results still stand. I will keep my score as it is currently. | Summary: This paper proposes a star-shaped diffusion probabilistic model, which is non-Markovian and works more like an autoregressive way of predicting intermediate states xt. Specifically, the authors define a forward process q(xt|x0) in which each intermediate state xt is directly related to the initial state x0. The reverse process is defined by p(xt-1 | xt, xt+1, ..., xT). To efficiently sample from this distribution, the authors propose a compressive representation Gt(xt, xt+1, ..., xT) that extracts all the information about x0 contained in the tail states {xt, ..., xT}, so the modified distribution becomes p(xt-1 | Gt). Training and inference proceed in a similar way to DDPM. Simple experiments show that the proposed method performs better than the baseline methods MTD and DDPM.
Strengths: 1. This paper provides a novel view of the definition of diffusion models and is very inspirational. It is worth thinking about whether the Markov process is the best design choice considering the fact that the whole history {xt, xt+1, ..., xT} contains more information than a single state xt.
2. The author proposed a novel diffusion model with basic evaluations, which may inspire the community for further exploration.
Weaknesses: 1. The presentation needs to be improved: (1) T, Ωt, and ht in Eq.13 lack interpretation; (2) It is not well proved that Gt covers all the information of x0 in {xt, xt+1, ..., xT}. Although the authors mention that there are explanations in the supplementary material, you need to ensure the reader has a basic understanding by just reading the paper.
2. The advantage and application value of the proposed method are not clear. For example, can you provide a practical case that DDPM cannot address while SS-DDPM can?
3. The experiments are too simple and not persuasive. To prove the advantage of SS-DDPM, it is encouraged to compare with state-of-the-art methods on diverse benchmarks.
4. As the authors said in line 128, "In general case the dimensionality of Gt would grow with the size of the tail", which raises scalability concerns that are not discussed.
5. There is no evidence that SS-DDPM can apply acceleration methods like DDIM.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: I would think the reverse process of q(xt|x0) should be p(x0|xt); can you explain why the reverse process is instead p(xt|xt+1, ..., xT)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 2 fair
Limitations: The scope of this paper is interesting, but it needs more time to polish the theory, presentation, and experiments. Considering its current form, I think this paper does not meet the acceptance criteria of NeurIPS. However, if there is enough evidence to show the value of this paper, I may also change my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments! We would like to address your concerns and questions below.
**Weakness 1 (1):**
Eq. 13 is a standard definition of the exponential family of distributions with a standard notation. $\mathcal{T}(x_t)$ is the sufficient statistic, $\eta(x_0)$ is the natural parameter, $h(x_t)$ is the base measure and $\Omega(x_0)$ is a normalization constant, sometimes called a (log)partition function. We will improve this section.
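For reference, this is the standard exponential-family form behind those symbols (our transcription; the exact subscripting in Eq. 13 of the paper may differ):

```latex
q(x_t \mid x_0) = h(x_t)\,\exp\!\big(\eta(x_0)^{\top}\mathcal{T}(x_t) - \Omega(x_0)\big),
\qquad
\Omega(x_0) = \log \int h(x_t)\,\exp\!\big(\eta(x_0)^{\top}\mathcal{T}(x_t)\big)\,dx_t,
```

so $\Omega(x_0)$ is precisely the log-partition function that makes the density integrate to one.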
**Weakness 1 (2):**
We formulate this property as Theorem 1 and prove it in Appendix 3. It is a technical proof, and it is common practice to only include the proofs in the Appendix. The intuition behind this theorem is the same as behind the original PKD theorem. We generalize it for the non-i.i.d. case, which is why we have to add an additional constraint (linearity of the parameterization). Strictly speaking, we prove the following equality: $q(x_0 | x_t, \dots, x_T) = q(x_0 | G_t)$, which means that “$G_t$ covers all the information of $x_0$ in $x_t,\dots, x_T$”.
**Weakness 2:**
When the data or parts of it lie on a manifold, its density in the original space is degenerate, and DDPM would fail to recover it. Even if DDPM produced good-looking samples, its density estimates would be unreliable. On the contrary, SS-DDPM with an appropriate distribution would produce both good-looking samples and a reliable estimate of the density. For example, data like molecular graphs could combine categorical data (atoms), spherical data (orientations) and positive data (distances).
**Weakness 3:**
At this stage our goal was to demonstrate that the model can be successfully applied with a variety of noising distributions rather than perfecting the model for each individual task or finding the best task for the model. Comparing with domain-specific state-of-the-art methods would require ad-hoc task-specific modifications and a very expensive hyperparameter search to make the comparison fair. These more challenging settings are something we would like to explore in the future.
**Weakness 4:**
In this paper we only consider distributions from the exponential family. In that case we provide a simple recipe for constructing a sufficient tail statistic $G_t$ of fixed dimensionality. Although the general case is beyond the scope of our paper, we suspect that in practice adequate statistics of fixed size would be enough, even if they are not strictly sufficient statistics.
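To illustrate why a fixed-size tail statistic can exist, here is a self-contained numerical sketch for the Gaussian case (our own illustration, not the paper's exact recipe; the schedules `a` and `sigma` are arbitrary placeholders). If $q(x_s|x_0) = \mathcal{N}(a_s x_0, \sigma_s^2)$, the tail likelihood depends on the data only through $\sum_{s \ge t} (a_s/\sigma_s^2)\, x_s$, which can be maintained incrementally with constant memory as the tail grows:

```python
import numpy as np

def tail_statistics(xs, a, sigma):
    """G[t] = sum_{s >= t} a[s] / sigma[s]**2 * xs[s], computed incrementally
    from the end of the chain, so each step only updates a fixed-size value."""
    G = np.zeros(len(xs))
    acc = 0.0
    for s in range(len(xs) - 1, -1, -1):
        acc += a[s] / sigma[s] ** 2 * xs[s]
        G[s] = acc
    return G

rng = np.random.default_rng(0)
x0 = 1.5
a = np.linspace(1.0, 0.1, 10)       # placeholder schedule
sigma = np.linspace(0.1, 1.0, 10)   # placeholder noise scales
xs = a * x0 + sigma * rng.standard_normal(10)

G = tail_statistics(xs, a, sigma)
# The incremental statistic matches a direct batch recomputation of the tail sum:
assert np.allclose(G[3], np.sum(a[3:] / sigma[3:] ** 2 * xs[3:]))
```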
**Weakness 5:**
This paper is focused on establishing the base SS-DDPM model. Since SS-DDPM and DDPM are closely connected, we suspect that it would be possible to adapt other modifications of DDPM to SS-DDPM as well (e.g. DDIM, DDRM, connection to SDE). However, these extensions are beyond the scope of this paper.
**Question 1:**
We discuss and illustrate it in detail in Appendix 2. The short answer is that we need to look at the forward and reverse processes as a whole, not at inverting individual steps.
Longer answer: Diffusion models model the whole set of variables $x_0, x_1, \dots, x_T$ by learning a joint distribution $p_\theta(x_0, x_1, \dots, x_T)$, called the reverse process, to approximate a fixed joint distribution $q(x_0, x_1, \dots, x_T)$, called the forward process. When the forward process is Markovian, i.e. it factorizes as $q(x_0)\prod q(x_t|x_{t-1})$, we can also rewrite it in reverse as $q(x_T)\prod q(x_{t-1}|x_t)$, and, therefore, it makes sense to approximate it with a Markovian reverse process $p_\theta(x_T)\prod p_\theta(x_{t-1}|x_t)$ (by using the same factorization and sufficiently flexible distributions, we can in theory reduce the approximation gap to zero). However, when the forward process is star-shaped, i.e. it factorizes as $q(x_0)\prod q(x_t|x_0)$, we can only write it in reverse as $q(x_T)\prod q(x_{t-1}|x_t, x_{t+1}, \dots, x_T)$. We could still try to approximate it with a Markovian reverse process $p_\theta$. It was the first thing we tried, and it failed miserably: see Figure 3 for details. In Appendix 2 we show that such an approximation introduces a huge irreducible approximation gap which grows with the number of diffusion steps. Because of that, we need to use the reverse process with the full dependency structure $p_\theta(x_T)\prod p_\theta(x_{t-1}|x_t, x_{t+1}, \dots, x_T)$. Fortunately, when the noising distributions come from the exponential family, we can rewrite it as a Markov process over sufficient tail statistics $p_\theta(x_T)\prod p_\theta(x_{t-1}|G_t)$, where $G_t=G(x_t, x_{t+1}, \dots, x_T)$, so the procedure is still computationally efficient.
---
Rebuttal Comment 1.1:
Comment: Thanks for your answer, which partly resolved my concern. I will slightly raise my rating to borderline accept. | Summary: This paper proposes a non-Markovian diffusion model, named the star-shaped diffusion model, that generates a sequence of noised images from the original image in the forward process.
The paper studies the theoretical foundation of this new type of model, showing that if the forward process is based on a subset of the exponential family of distributions, a tail statistic is a sufficient statistic for the reversed process. Based on this foundation, the reversed process can be conducted.
Strengths: The overall structure of this new diffusion model is new, novel and interesting.
The application of PKD theory is sound.
Weaknesses: Although the model is quite novel and new, there is a lack of motivation from practical problems -- the experiments are weak and somewhat toy. A showcase of a unique application of this model would be appreciated.
Lack of comparison with some other non-Markovian diffusion models on constrained domains, such as [1,2].
[1] Learning Diffusion Bridges on Constrained Domains
[2] First Hitting Diffusion Models for Generating Manifold, Graph and Categorical Data
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Is there any ODE-version of this SDE-based model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments! We would like to address your questions below.
**Weakness 1:**
At this stage our goal was to demonstrate that the model can be successfully applied with a variety of noising distributions rather than perfecting the model for each individual task or finding the best task for the model. In general, we would expect non-Gaussian SS-DDPM to outperform Gaussian DDPM when data (or parts of data) lies on manifolds that naturally support distributions from exponential families, e.g. hyperspheres, p.s.d. matrices, simplexes, etc. For example, data like molecular graphs could combine categorical data (atoms), spherical data (orientations) and positive data (distances).
**Weakness 2:**
Thanks for pointing out “Learning Diffusion Bridges on Constrained Domains”, we missed it! While we don’t have any experiments that overlap with that paper on hand, we would expect Categorical SS-DDPM to perform similarly to D3PM. The results could probably be improved by tuning the noising schedule to the particular tasks. The same point holds for the categorical data experiments with FHDM. Regarding FHDM on geodesic data: we achieve a similar performance on the fire dataset (our result is $-1.26 \pm 0.14$, FHDM reports $-1.24 \pm 0.08$). We will conduct the experiments on the other geodesic datasets and add a comparison.
**Question 1:**
Strictly speaking, for now there is no known direct connection between SS-DDPM and SDE-based models beyond the Gaussian case. However due to the stochasticity in the reverse process, SS-DDPM is indeed similar to SDE-based methods. Due to its close connection to DDPM, we suspect that it should be possible to adapt the DDIM model to SS-DDPM. DDIM is a deterministic version of DDPM that is close to ODE versions of SDE-based models. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents the Star-Shaped DDPM (SS-DDPM), a general recipe for designing a diffusion model with a noising process lying in a general subset of the exponential family. With a Gaussian noising process, SS-DDPM recovers the DDPM. Diverse experiments on synthetic and practical image and text datasets demonstrate the effectiveness of the proposed SS-DDPM.
Strengths: (1) A general recipe for designing a diffusion model, termed Star-Shaped DDPM (SS-DDPM), is proposed.
(2) The recipe is derived and analyzed in detail, with convincing statistical justifications.
(3) Diverse experiments are conducted to demonstrate the effectiveness of the SS-DDPM.
Weaknesses: (1) Many technical details are given in the supplementary material.
(2) The advantages and disadvantages of the SS-DDPM over the DDPM are not explicitly discussed.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: What are the limitations of the proposed SS-DDPM, when compared with other diffusion models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments! We would like to address your questions below.
**Weakness (1):**
We have tried to summarize the most crucial technical details in Section 3. Due to space limitations, we had to put more details in the appendix. Unfortunately, the NeurIPS template makes it more difficult to directly reference the appendix (supplementary material has to be uploaded as a separate file), so we hope that the upcoming arXiv version would be easier to read. Please let us know if particular sections stand out as being confusing, and we would do our best to clarify them in the next revision.
**Weakness (2) and Question (1):**
SS-DDPM is a direct generalization of DDPM, so many properties of the models are shared. The main advantage of SS-DDPM is being able to use other noising distributions which may be more appropriate to the particular task. The main disadvantage of non-Gaussian SS-DDPM is that it loses many interesting extensions of DDPM like DDIM, DDRM and the connection to SDEs. We feel like it should be possible to adapt these extensions to support SS-DDPM, so it is an interesting opportunity for future work. Also, we found it to be more difficult to come up with sensible noising schedules for SS-DDPM. SS-DDPM is rather sensitive to the noising schedule, so it would be nice to have a better way to choose (or learn) noising schedules. In this work we try to reuse the schedules used by existing models by matching the mutual information between the target clean data and the noisy input of the denoiser. Finally, in non-Gaussian SS-DDPM sampling the tail statistic $G_t$ requires sampling the whole tail $x_t, x_{t+1}, …, x_T$. This is not a big deal in most applications, as sampling these datapoints does not require evaluation of neural networks, but it is still worth noting. In practice it doesn’t increase the training time as DNN evaluation is still the bottleneck.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses. I will increase my rating from 7 to 8. | Summary: This paper proposed a star-shaped denoising diffusion probabilistic model (DDPM), which extends DDPM to non-Gaussian noises. As a result, the backward/generative process requires conditioning on tails. The authors then propose an efficient tail conditioning strategy which works when the forward process follows an exponential family with linear parameterization. Duality between star-shaped and Markov diffusion processes are also established which provides theoretical support for star-shaped DDPM for its ability to go beyond Gaussian noises. The effectiveness of star-shaped DDPM is demonstrated on several experiments.
Strengths: 1. This paper is written very clearly and well organized.
2. It also provides a general framework for DDPM that can incorporate non-Gaussian noises, where previous efforts were designed for specific noises and not generalizable to other distributions.
3. Duality between star-shaped and Markovian diffusion is established, which shows SS-DDPM and vanilla DDPM are equivalent in the Gaussian case.
Weaknesses: 1. While experiments on synthetic data demonstrate the ability of SS-DDPM to deal with non-Gaussian noises, the effectiveness of SS-DDPM on real data needs further verification. The current experiment on real data seems a bit inadequate.
2. Star-shaped denoising diffusion models have been proposed before (Rissanen et al. (2022)), and there is no comparison to this previous method.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. How does DDPM perform on the Geodesic data in the experiments?
2. How much improvement can SS-DDPM have over DDPM on real data? The advantage over DDPM seems a bit tiny based on current evaluation. Also, in which case would non-Gaussian noise be useful for DDPMs?
3. Rissanen et al. (2022) also proposed a similar star-shaped DDPM, where the reverse process is Markovian based on the heat equation. How does your method compare to theirs?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, they do.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments! We would like to address your questions below.
**W1 and Q2:**
DDPM is a special case of SS-DDPM, so it is safe to expect SS-DDPM to perform at least as well as DDPM. At this stage our goal was to demonstrate that the model can be successfully applied with a variety of noising distributions rather than perfecting the model for each individual task or finding the best task for the model. Careful tuning of the noising schedule and other improvements to the training procedure (e.g. resampling timestamps using importance sampling, more flexible reverse distributions, more architectural improvements, etc.) could be used to improve the performance further, but the added hyperparameters would require a careful ablation and make the experiments a lot more expensive. In general, we would expect non-Gaussian SS-DDPM to outperform Gaussian DDPM when data (or parts of data) lies on manifolds that naturally support distributions from exponential families, e.g. hyperspheres, p.s.d. matrices, simplexes, etc. For example, data like molecular graphs could combine categorical data (atoms), spherical data (orientations) and positive data (distances).
**W2:**
IHDM heavily relies on blurring rather than adding noise, resulting in very different diffusion dynamics. The small amount of noise (std=0.01) is there for regularization and to prevent the probabilistic model from being degenerate. Due to blurring and low noise there is a lot of information shared between consecutive timestamps, making both the forward and the reverse processes essentially Markovian. While the ELBO looks similar due to a similar factorization of the forward process, the underlying model is very different: our model is much closer to DDPM than to IHDM. We will expand on these differences in the next revision.
**Q1:**
We have evaluated the NLL on the fire dataset. Our result is $-1.26 \pm 0.14$. For comparison, the results reported in Riemannian Diffusion Models (Huang et al 2022) are $-1.38 \pm 0.05$ for RDM, $-1.24 \pm 0.07$ for RSDE (Riemannian Score-Based) and $0.28 \pm 0.2$ for DDPM (Stereographic SDE). Our model performs similar to RSDE and RDM and outperforms DDPM.
**Q3:**
IHDM achieves 18.96 FID on CIFAR-10, while our Beta SS-DDPM achieves 3.17 (IHDM uses 200 steps and we use 1000; at 200 steps our FID is 3.45). Also, while it looks like IHDM can use any noise distribution, it is not trivial to generalize it to other domains: one would need to come up with an appropriate notion of blurring.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, and I will keep the score. | null | null | null | null |
LayerNAS: Neural Architecture Search in Polynomial Complexity | Reject | Summary: The paper introduces LayerNAS, which is a method for neural architecture search (NAS). The idea is to reduce the computational cost of NAS, which is exponential in the number of layers. As such, the paper introduces a layerwise search option with the idea being that the current layer can be directly determined based on the results of the previous layers. This enables polynomial computational complexity. The paper goes one step further and establishes the inclusion of the cost-constraints, e.g. in eq. 4. LayerNAS is then empirically validated in ImageNet and in a standard benchmark of NAS, which includes cifar10, cifar100 and Imagenet16 datasets.
**Post rebuttal**: I appreciate the effort by the authors and their numerical evidence. I would urge the authors to include the references and the discussion in the camera-ready version. I would strongly encourage the authors to also include the numerical result on the cost per layer (see answer 2 in the original responses) and the additional results on transfer learning. One thing that I believe should be explained better in the main paper is the reasoning for the separation from larger architectures, e.g. the somewhat arbitrary 600M MAdds threshold.
Strengths: The reduction of the computational cost of NAS is an important aspect that makes the paper relevant for the NeurIPS community. Besides, a number of papers on NAS are published in NeurIPS and related conferences (**relevance**). The solution proposed for the layerwise search has appeared before, but the paper makes a complete framework to support the idea and empirically validates the framework. In addition, the paper makes a clear statement of the limitations and the assumptions that led to those (**clarity**). This enables the research community to extend this paper further.
Weaknesses: I am not sure what the novelty of the proposed method is; this is not clearly stated at the moment.
I find that training-free NAS methods are more important for the story, and as alternative methods, than the two lines devoted to them at the end of the related work. Having said that, I do understand that they do not cover the contribution of this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Overall, the overall writing is clear to me, but I have few questions:
1) What is the difference in the obtained architectures depending on the cost set as threshold? I think this insight is very important and should be included in the main paper.
2) How is the cost distributed over the layers? Are the first layers more costly in the end or is it uniformly distributed? This might be one of the drawbacks of this heuristic approach of selecting/focusing on one layer at a time.
3) How would the proposed architectures perform on transfer learning? I believe this is an important experiment, since NAS is not sota in ImageNet (current methods surpass the 85% accuracy), BUT NAS can be tremendously useful in transferring to tasks with less annotated data or different tasks.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are explicitly identified in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **"What is the difference in the obtained architectures depending on the cost set as threshold?"**
It is difficult to interpret the detailed architectural differences between LayerNAS and other models. However, in general, we found that LayerNAS models have more base filters and fewer expanded filters than the baseline models.
2. **"How is the cost distributed over layers"**
As an example, let's consider the MobileNetV3-Large search space. Here are the minimum and maximum MAdds values for different search options per layer:
|max cost (MAdds)|min cost (MAdds)|
|--|--|
|6.63 | 4.62|
|47.27 | 21.22|
|47.06 | 12.19|
|25.8 | 9.53|
|33.33 | 8.58|
|33.33 | 8.58|
|32.87 | 13.34|
|28.92 | 7.73|
|26.61 |9.81|
|26.61 |9.81|
|81.94 |27.8|
|131.83 |43.11|
|84.21 |28.93|
|66.67 |20.56|
|66.67 |20.56|
|17.87 |7.56|
Later layers tend to have more channels, parameters, and MAdds. We found that different layers contribute differently to the final model performance. We tried sorting layers by $\Delta acc / \Delta cost$, and searching for important layers first. However, this was computationally expensive and did not always lead to better performance.
3. **"How would the proposed architectures perform on transfer learning?"**
We replaced the backbone in RetinaNet with LayerNAS model backbones, trained and evaluated on the COCO dataset, and found that LayerNAS consistently performs better. Here are the results:
|Model|ImageNet top-1 acc|COCO mAP|# Params|MAdds|
|--|--|--|--|--|
|MobileNetV2|72.0|22.1|3.5M|300M|
|LayerNAS w/o SE|77.1|23.85|7.6M|598M|
|LayerNAS|78.6|24.84|9.7M|527M|
|MobileNetV3-Small|67.4|16|2.5M|56M|
|LayerNAS|69.0|17.94|3.7M|61M|
|MobileNetV3-Large|75.2|22.0|5.4M|219M|
|LayerNAS|75.6|23.75|5.1M|229M|
4. **Novelty**
Novelty of LayerNAS:
* **Formulation as a combinatorial optimization problem**: this is a novel formulation that hasn’t been proposed before.
* **Search space complexity**: this might be the first work that proposes a polynomial complexity search space. We address a key challenge in NAS, which is the large and complex search space.
* **Derivation of a rigorous method**: most NAS algorithms do not explicitly state the assumptions behind their methods. We derive LayerNAS in a rigorous way, so that we understand how and why LayerNAS works.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Dear authors,
I am thankful for your responses. Studying the rebuttal, I have few questions on the novelty and the corresponding claims:
1. The text mentions "we understand how and why LayerNAS works". I admit I am a bit uncertain about the objective here. Is this a convex objective such that we have guarantees of how and why LayerNAS works? Is there something I am missing?
2. What does the term "polynomial complexity search space" mean?
3. The rebuttal claims that "We address a key challenge in NAS, which is the large and complex search space.". This is great, but I would assume that would mean we are able at last to outperform neural networks with predetermined architectures (e.g. CNNs or transformers) on ImageNet, but this does not seem to be the case. Could the authors elaborate on why this is happening?
---
Reply to Comment 1.1.1:
Comment: Thanks for the valuable discussion.
1. We rigorously formulate the NAS problem as a combinatorial optimization problem. LayerNAS conducts its search on a complete search space from combinatorial optimization, which allows us to understand how it works. However, this does not guarantee an optimal solution, as there are many practical details that can affect the performance of the search, such as variability in training accuracy and the correlation between proxy job accuracy and model performance.
2. To search thoroughly with LayerNAS, the upper bound on the number of search trials is $O(HSL)$, where $H$ is a constant, $S$ is the number of search options per layer, and $L$ is the number of layers of the model. Many NAS works have this upper bound as $O(S^L)$.
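To put rough numbers on this gap, here is a tiny numerical sketch (the concrete values of $H$, $S$, and $L$ below are arbitrary placeholders, not the paper's settings):

```python
H, S, L = 100, 10, 16             # placeholder constants: H trials per bucket,
                                  # S options per layer, L layers
layerwise_trials = H * S * L      # LayerNAS upper bound, O(HSL)
exhaustive_trials = S ** L        # size of the full combinatorial space, O(S^L)

# The layerwise bound is polynomial while the full space is exponential in L:
assert layerwise_trials == 16_000
assert exhaustive_trials == 10 ** 16
```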
3. The search space in our work is a chain-structured search space from MobileNet backbone, according to the taxonomy from [this survey](https://arxiv.org/abs/2301.08727). ImageNet experiments in recent literatures often incorporate multiple techniques at the same time, such as [feature/label augmentation](https://openaccess.thecvf.com/content/CVPR2021/html/Pham_Meta_Pseudo_Labels_CVPR_2021_paper.html), [distillation](https://openaccess.thecvf.com/content_CVPR_2020/html/Xie_Self-Training_With_Noisy_Student_Improves_ImageNet_Classification_CVPR_2020_paper.html), [pre-training](https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/3665_ECCV_2020_paper.php) etc, which are out of scope for our current paper. NAS is an important tool for finding better model architectures under 600M MAdds. In our experiments, we compared LayerNAS with many other NAS works in the category of chain-structured search space and showed that LayerNAS outperforms other methods under different MAdd constraints. | Summary: The paper tries to overcome a drawback of Neural Architecture Search (NAS), an enormous search space that hard to traverse whole space to design a well-optimized network.
From an assumption that a previous layer in a network doesn’t affect the subsequent layers, the paper converts multi-objective NAS to a combinatorial optimization problem.
With the proposed method, the paper designs optimized networks layer by layer, unlike other works that design the whole network simultaneously. It leads to reducing the search complexity of NAS to polynomial complexity.
Strengths: - Writing is easy to follow.
- The paper compares the proposed method with other methods fairly with NATS-Bench.
Weaknesses: - The proposed method of removing networks from the search space based on their costs is not novel.
- The search cost of the proposed method is still worse than one-shot NAS.
- The paper naively analyzes the cost of searched networks with MAdds only, omitting other metrics such as energy consumption or latency.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Is it guaranteed that networks searched with the proposed method are not suboptimal solutions? For example, if the proposed method is applied to every candidate for the first layer, is the network searched from scratch with the proposed method the best solution?
- According to section 5.1, searching the shared base filters first, and then sequentially searching kernel sizes and expanded filters, leads to decoupling the preceding and succeeding layers. Can support for this statement be provided?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper refers to several limitations of the proposed method in Conclusion and Future Work. However, a thorough analysis is needed in terms of cost, too.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **"Is it sure ... aren’t suboptimal solutions?"**
LayerNAS traverses on a finite complete search space, which significantly aids in avoiding suboptimal results.
Assume we are searching for the optimal model $s_1..s_n$, and we store all possible model candidates for each layer. During the search process on $layer_n$, we generate model architectures by modifying $s_n$ to other options in $\mathbb{S}$. Since we store all model architectures for $layer_{n-1}$, the search process can create all $|\mathbb{S}|^n$ candidates on $layer_n$ by adding each $s_n \in \mathbb{S}$ to the models in $\mathbb{M}_{n-1}$. Therefore, $\mathbb{M}_n$ contains all possibilities in the search space. This process can then be applied backward to the first layer.
Nevertheless, it is important to note that LayerNAS cannot guarantee the optimal result due to various practical details involved in the process.
**a. Grouping method and resolution**
In cases where two candidates have similar costs, LayerNAS retains only the one with higher accuracy, potentially leading to the omission of the optimal candidate within that group.
**b. Variability in Training Accuracy**
The training process does not ensure consistent accuracy with each iteration. For further details on this matter, please refer to Appendix F.
**c. Correlation between Proxy Job Accuracy and Model Performance**
We utilize proxy jobs to assess the quality of candidates. However, it's important to note that this correlation is not always reliable. Some candidates may be disregarded due to their lower proxy accuracy.
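To make the layerwise traversal and the cost grouping described above concrete, here is a minimal self-contained sketch (an illustration of our own, not the actual implementation; the function name, the `bucket` resolution, and the use of a toy quality score in place of trained accuracy are all assumptions):

```python
def layerwise_search(options_per_layer, bucket=1.0):
    """options_per_layer[i]: list of (cost, quality) choices for layer i.
    Per rounded total cost ("bucket"), keep only the best partial model,
    so each sweep is O(#buckets * #options) rather than exponential in depth."""
    # bucket index -> (quality, exact_cost, option indices chosen so far)
    candidates = {0: (0.0, 0.0, ())}
    for options in options_per_layer:
        nxt = {}
        for quality, cost, choices in candidates.values():
            for i, (c, dq) in enumerate(options):
                cand = (quality + dq, cost + c, choices + (i,))
                key = round(cand[1] / bucket)
                if key not in nxt or cand[0] > nxt[key][0]:
                    nxt[key] = cand  # keep the best model in this cost bucket
        candidates = nxt
    return candidates

# Toy example: 3 layers, each with cheap/medium/expensive options (cost, quality).
layers = [[(1.0, 0.2), (2.0, 0.5), (3.0, 0.6)]] * 3
result = layerwise_search(layers, bucket=1.0)
# Best full model under a total-cost budget of 6:
best = max((v for v in result.values() if v[1] <= 6.0), key=lambda v: v[0])
```

In this toy setting the search keeps one candidate per integer cost bucket at each layer, and the budget-constrained optimum is read off the final buckets.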
**"… can support for this statement be provided?"**
Let's define a model layer with $b$ filters and $e$ expanded filters as $L(b, e)$. Assuming that for any given $b_1 > b_2$, $L(b_1, e)$ performs better than $L(b_2, e)$, and for any $e_1 > e_2$, $L(b, e_1)$ performs better than $L(b, e_2)$. When we first search for $b$, we can obtain multiple models. Then, when we search for $e$, we can modify $e_0$ to explore other options in the next layer search, ensuring that the change in $e$ does not affect the choice of $b$. However, if we simultaneously search for $L(b, e)$, certain subsequent layers must share the same $b$ to construct residual shortcuts, leading to a dependency between these following layers and the previous ones.
**Novelty**
We acknowledge that previous works, such as NSGA-Net and LEMONADE, have split cost and accuracy into two metrics and efficiently searched multi-objective NAS problems. Nevertheless, our major contribution and novelty lie in achieving polynomial complexity on the search space, a significant advancement that sets LayerNAS apart from these methods. This novelty is also recognized by the other reviewers.
* **Formulation as a combinatorial optimization problem.** : this is a novel formulation that hasn’t been proposed before.
* **Search space complexity**: This might be the first work that proposes a polynomial complexity search space. We address a key challenge in NAS, which is the large and complex search space.
* **Derive a rigorous method**: Most NAS algorithms do not explicitly express the assumptions of their methods. We derive LayerNAS in a rigorous way, so that we understand how and why LayerNAS works.
**"The search cost is still worse than One-shot NAS"**
Search cost and search complexity are not the same. The search cost can be defined as the product of num_trainable_params and epochs. Let's consider a model with $L$ layers, where each layer offers $S$ options, each option contains an average of $k$ trainable parameters, and training runs for $t$ epochs, resulting in a training cost of O(tkL). For one-shot NAS, a supernet is constructed, containing $S$ times more trainable parameters in a single training trial. Therefore, the search cost of one-shot NAS is O(tkSL).
LayerNAS requires O(HSL) search trials. Each trial trains for $t' = t / 20$ epochs. Since LayerNAS only searches one layer per trial and can load checkpoint weights from the previous trial with the same preceding search choices, the actual number of trainable parameters in a trial can be reduced to O(k) instead of O(kL). As a result, the overall search cost of LayerNAS is O(t'kHSL). When the hyperparameter H is set to 20, the search cost becomes equivalent to that of OneShot-NAS. However, if a higher value for H is desired to achieve better solutions, the search cost of LayerNAS will increase linearly.
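The claimed equivalence at H = 20 can be checked by plugging placeholder numbers into the two cost expressions (the values below are purely illustrative):

```python
# Illustrative cost comparison; t, k, S, L are arbitrary placeholder values.
t, k, S, L = 100, 1000, 5, 10        # epochs, params/option, options/layer, layers
one_shot_cost = t * (k * S) * L      # supernet holds S times more parameters: O(tkSL)

H = 20
t_prime = t // 20                    # each LayerNAS trial trains t/20 epochs
layernas_cost = t_prime * k * H * S * L   # O(t'kHSL)

assert layernas_cost == one_shot_cost     # equal exactly when H == 20
```

Raising `H` above 20 scales `layernas_cost` linearly, matching the linear increase described above.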
Furthermore, LayerNAS exhibits several comparative advantages:
* **Quality**: LayerNAS consistently discovers superior models.
* **Larger Search Space**: LayerNAS can theoretically accommodate an unlimited number of options per layer. In contrast, OneShot NAS is constrained by the size of the supernet.
* **Stability**: In practice, some OneShot NAS approaches may encounter challenges in achieving convergence, while LayerNAS demonstrates enhanced stability and does not have this issue.
* **Adaptability to Smaller GPUs**: As a multi-trial NAS approach, LayerNAS can work effectively as long as the model fits on the GPU, while OneShot NAS requires larger TPU topologies to fit the supernet.
However, it's important to note that LayerNAS is not mutually exclusive with One-shot NAS. It can be employed in conjunction with OneShot NAS to initially find a coarse architecture and then perform a more refined and granular search using LayerNAS to achieve better overall performance.
**"The paper naively analyzes the cost of searched network with MAdds, omitting other metrics such as energy or latency, etc"**
We completely agree that metrics related to real devices, such as energy consumption and latency, offer a more accurate reflection of model performance. LayerNAS is designed to be flexible to support these cost metrics. However, when comparing our results with previous works, to ensure a fair comparison, we present MAdds results in this context. As LayerNAS primarily focuses on proposing new NAS methods, we believe these experiments adequately demonstrate the effectiveness of our approach.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their responses. After reading the authors' comments, I agree that executing NAS in polynomial complexity deserves to be researched further. Therefore, I decide to raise my score to 5. | Summary: The authors propose LayerNAS with polynomial complexity. Namely, this work transforms the multi-objective NAS problem into a Combinatorial Optimization problem with proper assumptions.
LayerNAS is benchmarked against recent NAS arts on ImageNet classification task, as well as on dedicated NATS-Bench in terms of quality, stability, and efficiency.
Algorithm details and searched architectures are provided, which might benefit the community.
Strengths: 1. The paper is clearly written and easy to follow. The detailed algorithm and searched architectures are provided, which makes the results replicable.
2. Extensive results are reported on NASBench-101, and the results are promising.
3. The limitations of the assumptions are properly discussed.
4. Performing NAS in polynomial complexity is of good research impact and real-world applications.
Weaknesses: 1. There are some format flaws, e.g., the abstract should be a single paragraph.
2. Table 2 is not informative enough. It would be better to include more details such as training epochs, augmentations, whether distillation is utilized or not, etc. for comparison. Plus, sometimes it is hard to tell whether LayerNAS is better than previous arts when MAdds or Params are not aligned (e.g., LayerNAS has 50% more params than MobileNetV3-Small). I wonder if it is possible to strictly align both params and MAdds and compare accuracy, as anyway, LayerNAS is a multi-objective search.
3. I wonder if there are latency-driven search results.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. **"Format flaws"**
Thanks for the comment. We have revised the abstract into a concise single paragraph.
2a. **"Table 2 is not informative enough"**
We have endeavored to provide more information in Table 2.
| Model | Top1 Acc. | Params | MAdds | Training Epochs | Data Augmentation |
|--|--|--|--|--|--|
| MobileNetV3-Small | 67.4 | 2.5M | 56M | - | - |
| MNasSmall | 64.9 | 1.9M | 65M | - | - |
| LayerNAS | 69.0 | 3.7M | 61M | 500 | Same as MBV2 |
| MobileNetV3-Large | 75.2 | 5.4M | 219M | - | - |
| LayerNAS | 75.6 | 5.1M | 229M | 500 | Same as MBV2 |
| MobileNetV2 | 72.0 | 3.5M | 300M | - | - |
| ProxylessNAS-mobile | 74.6 | 4.1M | 320M | 600 | No |
| MNasNet-A1 | 75.2 | 3.9M | 315M | - | - |
| FairNAS-C w/o SE | 74.7 | 5.6M | 325M | - | Same as MBV2 |
| LayerNAS w/o SE | 75.5 | 3.5M | 319M | 500 | Same as MBV2 |
| EfficientNet-B0 | 77.1 | 5.3M | 390M | 350 | AutoAugmentation |
| SGNAS-B | 76.8 | - | 326M | - | - |
| FairNAS-C w/o SE | 76.7 | 5.6M | 325M | - | Same as MBV2 |
| GreedyNAS-B | 76.8 | 5.2M | 324M | - | - |
| LayerNAS | 77.1 | 5.2M | 322M | 500 | Same as MBV2 |
| MobileNetV2 1.4x | 74.7 | 6.9M | 585M | - | - |
| ProgressiveNAS | 74.2 | 5.1M | 588M | 600 | - |
| Shapley-NAS | 76.1 | 5.4M | 582M | 600 | - |
| MAGIC-AT | 76.8 | 6M | 598M | - | - |
| LayerNAS w/o SE | 77.1 | 7.6M | 598M | 500 | Same as MBV2 |
| LayerNAS | 78.6 | 9.7M | 627M | 500 | Same as MBV2 |
We intentionally excluded methods with distillation from our results because it is difficult to isolate the contribution of distillation from NAS. LayerNAS consistently outperforms methods without distillation.
2b. **"strictly align both params and MAdds and compare accuracy"**
Thank you for the comment. In order to ensure a fair comparison with other existing works, we conducted the search using MAdds as the cost metric. We would like to emphasize that LayerNAS is designed to be flexible and can accommodate multiple cost metrics. With a simple modification, LayerNAS can be applied to address two different constraints:
```python
c_1 = cost_1(M_l)
c_2 = cost_2(M_l)
# keep the best-performing candidate for each (c_1, c_2) pair
update M_{l+1, c_1, c_2} if M_l has better performance
```
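A minimal runnable version of this two-constraint update, with a memo table keyed by both cost buckets, might look like the following (the names `memo` and `update` are hypothetical, not from the paper):

```python
# Hypothetical sketch: keep the best-performing candidate per (c_1, c_2) pair.
memo = {}

def update(layer, c_1, c_2, model, accuracy):
    """Store `model` for layer l+1 if it beats the stored candidate
    that shares the same pair of cost-bucket values."""
    key = (layer + 1, c_1, c_2)
    if key not in memo or accuracy > memo[key][0]:
        memo[key] = (accuracy, model)
```

Extending to more constraints only grows the memo key, leaving the layer-wise search loop unchanged.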
3. **"Latency-driven search result"**
We appreciate your valuable comment. We agree that real device latency is a superior metric for evaluating model performance. In our study, we focused on conducting NAS on MAdds to facilitate a fair comparison with existing popular works in the field. We have not yet had the opportunity to conduct a new round of search based on latency. We will report the findings when ready. | Summary: This paper proposes a novel approach of breaking down the NAS problem into a Combinatorial Optimization problem.
Strengths: The paper is well-written and easy to follow. The authors provide clear explanations and examples throughout the paper.
Breaking down the search problem into a Combinatorial Optimization problem seems novel and interesting, and reducing the search cost to polynomial time, which is clearly a breakthrough to the research community.
LayerNAS can be applied to operation, topology and multi-objective NAS search
Results on ImageNet seems to surpass state-of-the-art methods by a clear margin, evidencing their effectiveness of LayerNAS.
Weaknesses: I do not particularly have a question; this paper seems to be easy enough to follow.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are honored to have our work recognized by you. Thank you very much! | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Diverse Shape Completion via Style Modulated Generative Adversarial Networks | Accept (poster) | Summary: This paper proposes a new conditional generative network that can produce diverse completions of a partially observed point cloud. The stochasticity is introduced via style modulation. A style code is learned to explicitly carry shape category information leading to better completions. Moreover, diversity penalties and discriminators at multiple scales are also set to prevent conditional mode collapse. Experiments show that the proposed framework can achieve significant improvements in respecting the partial observations while obtaining greater diversity in completions.
Strengths: The results on different datasets show the effectiveness of the proposed method.
Weaknesses: 1. The main target of this paper is to achieve diverse synthesis. However, no such visual samples are provided. The authors should provide the diverse synthesis for different categories in the main paper.
2. How to decide the optimal values for loss weights in Eq. 7? Although the authors claim that the same loss weight setting can lead to good results on all experimental datasets, the experimental datasets only contain several categories. More categories should be considered into analysis.
3. The idea of this paper is derived from the 2D StyleGAN. The difference about the idea should be clearly indicated.
4. The method has not compared with SOTA methods that use the diffusion model, like [30, 31]. And this method should also support unconditional point cloud synthesis?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please answer the questions in the weakness section.
Moreover, can this method be utilized into the point cloud completion of scenes? Like ScanNet dataset.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Further considering point cloud completion with text conditions can help to achieve open-world and diverse synthesis effects, which can improve the contribution of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Ahkf for their valuable and constructive comments.
**No visual samples of diverse synthesis are shown**
Qualitative comparisons to other methods were included in Figures 1, 4 and 5 in our main paper. In each of these figures we show 3 completions produced for the same partial input. We additionally shared more visual examples in our supplemental material. We will add a sentence to our main paper mentioning that more visual examples can be found in our supplementary material.
**How did you decide optimal values for loss weights? More categories should be considered for analysis.**
The loss weights were set by initially setting them all to a value of 1. We found this led to overfitting to the single ground truth completion and less diverse completions, thus we decreased the weight of our completion loss and increased the weight of our diversity penalty, which led to more satisfactory results.
In terms of the categories chosen, we follow previous works (cGAN [8], IMLE [9], PVD [10], ShapeFormer [13]) and evaluate on the chair, table, and airplane categories of the 3DEPN dataset and the chair, table, and lamp categories of the PartNet dataset. To further validate our method we have performed additional experiments on the shoe, toys, and consumer goods categories of the Google Scanned Objects dataset.
**Idea is derived from 2D StyleGAN [54], the difference should be clearly indicated.**
Note that the only idea we use from StyleGAN is weight modulation as a way to inject style codes into our features. 2D StyleGAN is focused on unconditional 2D image generation while our work is on diverse 3D shape completion. StyleGAN starts from a learned constant tensor while our work starts from a set of features extracted from a partial shape. Additionally, in StyleGAN, style codes start from random noise and are mapped to style space through a mapping network while in our work we extract style codes during training from complete shapes. Please refer to lines 151-158 in our paper for a discussion on StyleGAN's style code versus ours as well as lines 180-185 where we mention partial input features and learned style codes as input to our style modulation.
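For readers unfamiliar with the mechanism, the generic StyleGAN2-style weight modulation/demodulation step referred to here can be sketched as follows (a simplified dense-layer illustration of the general technique, not the paper's actual code):

```python
import numpy as np

def modulate(weight, style, eps=1e-8):
    """Scale each input channel of `weight` by the style vector (modulation),
    then renormalize every output filter to unit norm (demodulation)."""
    w = weight * style[None, :]                                # (out_ch, in_ch)
    demod = np.sqrt((w ** 2).sum(axis=1, keepdims=True) + eps)
    return w / demod
```

The style vector thus changes the direction, but not the scale, of each filter, which is what lets a style code steer features without destabilizing training.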
**Method does not compare against SOTA diffusion models [30, 31]. Does the method support unconditional point cloud synthesis?**
The works by Luo et al. [30] and Zeng et al. [31] are SOTA methods for the task of unconditional 3D shape generation while our work is focused on the task of diverse point cloud completion. Their methods do not accept a partial point cloud as input and hence cannot be compared on the task we're trying to solve.
We have compared against Point-Voxel Diffusion (PVD) by Zhou et al. [10] which is a diffusion based method that can perform diverse shape completion. We show our method outperforms PVD quantitatively and qualitatively in Table 2, Table 3, and Figure 5 of our main paper as well as Figure 3 of our supplemental. We also find our method runs 500 times faster than PVD.
Our method does not directly support unconditional shape generation as our seed generator expects partial shape features as one of its inputs. The seed generator could be modified to be conditioned on random noise instead of partial shape features; however, that is outside the scope of this paper which is point cloud completion.
**Can the method be used on datasets like ScanNet?**
Our method works on segmented point clouds and thus it is possible to work on ScanNet if segmented properly. However, our method would need to be modified to avoid producing completions that intersect with other occupied points in the scene, which we mentioned as future work in lines 328-329 of our main paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal from the authors. The answers from the authors are reasonable, and I decided to keep the positive score. | Summary: The paper proposes to reconstruct partial point cloud inputs using a multi-modal process, where the generator can output multiple plausible shapes. The key idea is to have a separate network (StyleEncoder) that extracts style from an input, in addition to a separate network (PartialEncoder) that extracts structural information, that are then combined to generate the final reconstruction. Another contribution is a multi-scale discriminator that checks real and fake pairs at multiple generator output stages. The entire pipeline is essentially a conditional GAN network for generating 3D shapes.
Strengths: The paper proposes to address an important problem where there could be multiple potential reconstructions given a partial input. Limitations of recent work (SeedFormer [7]) are addressed with a clever network design strategy. The method is evaluated on three public datasets (3D-EPN, GSO, and Part-Net) with promising quantitative results.
Weaknesses: The methodology section is difficult to follow, and I found it hard to understand the training process. In addition, it seems that the proposed technique is heavily based on previous work (e.g., SegFormer). It may help to explain the key technical contributions with respect to these previous works and defer the reader to the other papers for understanding technical details.
In addition, the validation of the proposed technique is rather limited. The training procedure and validation steps are not clearly described. Is the method sensitive to shape orientation? What type of data augmentation is used during training, and is it trained on a per-category level (thus learning certain shape priors) or a category-agnostic level?
I would have expected to see ablation experiments with same methodology but different network architecture. Finally, the authors evaluate on PartNet, yet, never quantify part level accuracy.
In Tables 5 and 6, ablation studies of the style code and discriminator have very similar MMD metric but large UHD metric. I did not understand the choice and motivation behind these metrics. Why not use standard Hausdorff distance instead?
Finally, the method is geared toward generating multiple outputs given a single input, yet this capability is never evaluated. I would have expected to see additional examples of the multiple plausible reconstructions an input can have, followed by a user study or a qualitative evaluation of the results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. On line 256, it is stated the input contains 1024 points and the output contains 2048 points. Why is there a mismatch in these numbers?
2. If the input is meant to be a partial point cloud, have the authors considered the effect of point density?
3. Are selected datasets benchmark datasets for partial shape reconstruction?
4. Is the goal of the multi-scale discriminator to address partial input (lack of ground truth for each structure/style pair, or is it to address structure at different scales)? Given the name and design, the naming and purpose is confusing.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper proposed clever modifications on an existing network (SegFormer) to allow it to generate multiple completions given a single partial input. The idea is general, but it does not appear that the method is well-validated outside of the specific architecture considered, and thus, its impact is not clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer R39m for their valuable and constructive comments.
**Paper is difficult to follow. Helps to state key contributions.**
We'd like to clarify any confusion you've had. Meanwhile, note that 2 other reviewers found the paper well-written. First, we want to note that our work is not related to SegFormer, but is related to the deterministic point cloud completion method SeedFormer [7].
Our main contributions are outlined in lines 63-69 of our paper. In particular, we propose:
- A diverse seed generator which produces diverse completions via style modulation of partial shape features (see lines 179-182)
- To learn the style codes used in style modulation from the distribution of complete shapes via a style encoder (see lines 151-158)
- Discriminators at multiple scales to enable shape completion without access to multiple ground truth completions per partial input (see lines 205-207 and 217-218)
- A diversity penalty at multiple scales which prevents conditional mode collapse (see lines 226-232)
None of these exist in SeedFormer, which is completely deterministic in the completion it generates.
**Validation is limited. Training and validation is not clearly described. Data augmentation performed?**
We beg to differ about the claim that the validation is limited. We have quantitatively evaluated on 3 different datasets, compared against major baselines including a diffusion model, and performed multiple ablation studies. Training procedure, datasets, and metrics used can be found in lines 256-273 of our paper. We provided full dataset information and mathematical definitions of metrics in Sections 3 and 4 of our supplemental.
We did not perform any data augmentation to fairly compare against prior methods which also did not perform any augmentation. Our sensitivity to shape orientation is hence similar to prior work.
Our method is trained per category similar to other diverse shape completion works we compare against [8, 9, 10].
**Expected to see ablation experiments on architecture. Part level accuracy was not quantified for PartNet**
We provide additional ablations on network design in our supplemental, where we compare our PointConv-based partial encoder with SeedFormer's partial encoder in Table 1 and compare our proposed diversity penalty with an alternative baseline diversity penalty in Table 2.
We have included a new ablation on our losses in the table presented in our global response.
We do not evaluate part level accuracy in PartNet because for the task of multimodal shape completion, it is not required to completely reconstruct all the ground truth shapes. Also, it is not possible to evaluate bidirectional metrics such as Chamfer distance or MMD, as completions are not predicted at the part level; thus there is no notion of what part a point belongs to. Additionally, one-sided metrics such as UHD would penalize incorrectly, as not all parts need to exist in the completions for them to be considered valid. For example, a ground truth chair may have arm rests; however, if a partial point cloud of the chair only contains its legs, completions of the chair without arm rests may still be valid completions. Also, note such a metric is not computed in any of the works we compare against [8, 9, 10].
**Why do ablation results have similar MMD but large UHD? What is the motivation of these metrics? Why not standard Hausdorff distance?**
The UHD we used is the standard one-sided Hausdorff distance. MMD performs an average over distances to nearest neighbors while UHD performs a max over them, and thus is more susceptible to large variations in the presence of noisy completions (see Section 4 of the supplemental for mathematical definitions of the metrics).
MMD measures completion quality and coverage of the test set and UHD measures fidelity to the partial input. These are standard evaluation metrics for the task of diverse shape completion [8, 9, 10, 13].
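The max-versus-mean distinction can be seen in a toy example (only the aggregation step is illustrated here; the full MMD and UHD definitions are in the paper's supplemental):

```python
import numpy as np

def nn_dists(a, b):
    """Distance from each point of `a` to its nearest neighbor in `b`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

partial = np.array([[float(i), 0.0, 0.0] for i in range(10)])
covered = nn_dists(partial, partial)      # a completion that covers the input
missing = nn_dists(partial, partial[:5])  # a completion that misses half of it

# A max (UHD-style aggregation) reacts strongly to the uncovered region,
# while a mean (MMD-style averaging) moves far less:
assert missing.max() == 5.0 and missing.mean() == 1.5
```

This is why a single poorly covered region can produce a large UHD while leaving MMD-style averages nearly unchanged.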
**Method is geared toward generating multiple outputs but capability is never evaluated.**
Note that the TMD metric quantitatively evaluates the diversity of generated outputs. Besides, qualitative examples of diverse completions are shown in Figures 1, 4 and 5 of our main paper. In the supplemental, we shared more visual examples of our method producing diverse completions per partial input. We will add a sentence to our main paper mentioning that more visual examples can be found in our supplemental.
**Why does input have 1024 points and output have 2048 points?**
We follow the standard convention for multimodal shape completion which is to use a 1024 point partial input and produce a 2048 point completion as output [8, 9, 10, 13].
**Is input point density considered?**
We make no assumption on point density as it can vary heavily across shapes and viewpoints. 3DEPN and GSO inputs are generated by lifting depth maps into 3D and thus exhibit non-uniform density. For PartNet, we uniformly sample points in the parts kept; however, the number of parts and size of each part can still vary.
**Benchmark datasets for diverse completion task?**
3DEPN and PartNet datasets are benchmark datasets for diverse shape completion that previous works have evaluated on [8, 9, 10, 13]. We also evaluated on Google Scanned Objects to see how our method performs when trained with real data.
**Goal of multi-scale discriminator?**
We introduce adversarial learning to address lack of ground truth for each structure/style pair (lines 205-207 of our paper).
The goal of our multi-scale discriminator is to discriminate completions at different output resolutions. At coarser levels, points represent the skeletal structure of the shape, with each point being more important to whether the shape looks realistic. We find that discriminating at these coarse levels along with the finer levels helps with shape completion. We show a completion result using single vs. multiple discriminators in Figure 5 of our supplemental.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for providing a detailed rebuttal. I have misunderstood parts of the paper and the rebuttal helped clarify, therefore, I increased my score. In the method section, please focus on the contributions of the approach, just like you did in the rebuttal, so it is easy to decipher which part is novel and what is borrowed from prior work. A potential and partial solution is to rename subsections to include key ideas (e.g., style encoder via style modulation).
The sentence on line 157-157 that begins with "our style codes explicitly carry style information about a shape category.." is confusing since you only train per category, not across categories.
In figures 1,4,5 where you include multiple completions, please add labels to explain that each (blue) shape is a possible completion.
Information about train/val/test splits should be included in the main manuscript, not in the supplementary material.
---
Reply to Comment 1.1.1:
Comment: Thank you for the feedback and suggested changes. We will do our best to incorporate these changes in the final version of our paper. | Summary: The paper proposes a diverse shape completion method by extracting style codes from complete shapes and learning a distribution over them. Moreover, diversity penalties and discriminators at multiple scales are introduced as well to prevent conditional modal collapse to generate various object shapes. To verify the effectiveness of the method, various experiments are conducted, and promising results are observed.
Strengths: 1. The paper is well written and organized.
2. The methods are interesting and promising results are obtained.
Weaknesses: 1. According to line 156, “style codes” carrying category information are learnt from the distribution of complete shapes, however, the goal is to generate diverse shapes not only among different classes but also within the same class. Hence, how to get the diverse information for a single class seems missing in the paper.
2. What is the difference of style encoder technique compared to VAE? According to the illustration in Sec. 3.2, it looks the same to VAE, which potentially results in limited novelty of the proposed style encoder.
3. It would be better to also show some ablation studies on the network architectures by removing specific components to see how different parts contribute to the overall performances, which could be done by settings specific loss's weight to 0.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Some limitations are discussed in detail. To make a more comprehensive comparison with other SOTA works, possible memory consumption or time efficiency could be illustrated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer ohCS for their valuable and constructive comments.
**Diverse information for a single class seems to be missing.**
Our method is trained per category similar to other multimodal shape completion works such as cGAN [8], IMLE [9], and PVD [10]. Thus, the "style codes" we learn carry different style information from complete shapes within a category rather than across categories. In Figure 2 of our attached rebuttal PDF we project style codes to 2D space using PCA and show that neighborhoods in 2D space contain shapes with similar characteristics/styles.
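The PCA projection used for that visualization can be reproduced generically with a standard SVD-based projection (a sketch of the technique only; the actual style codes are not reproduced here):

```python
import numpy as np

def pca_2d(codes):
    """Project style codes onto their top two principal components."""
    centered = codes - codes.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```

Plotting the two returned coordinates per code then shows whether nearby codes correspond to shapes with similar styles.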
**What is the difference of style encoder technique from VAE? It looks similar to VAE and thus there is potentially limited novelty of the style encoder.**
In principle, the style encoder uses similar loss functions and sampling tricks as a VAE. However, there is a significant difference, as we seek to make the reconstruction capability of the style encoder **worse** by using fewer dimensions and an inefficient PointNet encoder. The reason is that the input of the style encoder is the ground truth shape, and if one reconstructs it directly from the style encoder, then the encoding from the partial input would have little purpose, which hurts generalization and diversity. Table 4 shows our finding that by making the style encoder **worse** by using fewer dimensions and adding noise, we were able to take more information from the other partial input encoder and achieve better diversity and reconstruction error during test time. We believe this is a novel finding.
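The sampling step shared with a VAE is the standard reparameterization trick, sketched below (an illustration of the shared mechanism only, not the paper's exact encoder):

```python
import numpy as np

def sample_style(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

Writing the sample this way keeps the path from `mu` and `log_var` to `z` differentiable, which is what both a VAE and the style encoder rely on during training.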
**Would be better to show some ablations on network architecture by removing components/losses**
In our global response we have provided a new table where we train our method with each of $L_{comp}$, $L_{div}$, and $L_{part}$ set to 0. Please see our global response for further comments on this ablation.
**Would be nice to show memory consumption or time efficiency**
In the last paragraph of Section 4.1 (lines 297-299) of our main paper we provide a comparison to other methods of average inference speed for producing 10 completions of a single partial input. We find that our method is capable of near real-time speeds and is 500 times faster than diffusion-based method PVD [10].
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal; most of my concerns are addressed. Hence, I would like to keep my positive rating. One minor concern is that TMD and UHD in the ablation studies actually show slightly worse performance when all losses are used.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response. We hope we can address the minor concern you have.
While the TMD is slightly worse when using all our losses, this does not necessarily correspond to worse diversity. As we mention in our global rebuttal, TMD is easy to increase when completion quality (MMD) and fidelity to the partial input (UHD) are poor, as noisy completions are one way to produce high TMD. Note that in the two cases where TMD is higher (w/o $L_{comp}$ or w/o $L_{part}$), the MMD and UHD are worse, suggesting that the TMD may be driven up through noise rather than more diverse completions.
Regarding UHD, we make a trade-off that we believe is worthwhile. When using all our losses compared to w/o $L_{div}$, we sacrifice UHD slightly for a large increase in TMD. We would also like to point out that the UHD we obtain with all our losses still significantly outperforms the UHD obtained by the other multimodal shape completion methods we compare against. | Summary: The goal of multimodal shape completion is to generate many different plausible completions of an incomplete shape. Based on the conditional GAN, this paper introduces two key concepts to improve the diversity and accuracy of multimodal completion. One is to use style codes instead of random noise, which better reflects the distribution of complete shapes. The other is the use of multiscale discriminators to refine the predicted shapes from coarse to fine. The method is evaluated on both synthetic and real datasets and shows significant improvements over comparable methods on standard metrics.
Strengths: 1. The results of multimodal completion look very clean and diverse. The numerical results show a new SOTA.
2. The use of the learned style codes instead of Gaussian noise is reasonable since the former is more similar to the distribution of complete shapes.
Weaknesses: 1. Ablation study with and without $\mathcal L_{comp}$ and $\mathcal L_{div}$ is not provided.
2. The flat and thin structures of the shapes generated by the proposed method look good, while those of other methods are noisier. For the task of multimodal completion, this is quite impressive.
According to Figure 5 in the supplementary file, the main reason is probably the use of multiscale discriminators, since the results of a single scale are noisy. Is this true? And why are 4 scales used? If more than 4 scales are used, is the quality continuously improved?
3. The distribution of style codes should better reflect the actual shape distribution. To better verify this, a visual analysis comparing the distribution of the different conditional codes is helpful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is this method applicable to unseen categories?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer iNun for their valuable and constructive comments.
**Ablation study on $L_{comp}$ and $L_{div}$**
In our global response we have provided a new table where we train our method with each of $L_{comp}$, $L_{div}$, and $L_{part}$ set to 0. Please see our global response for further comments on this ablation.
**Is the multi-scale discriminator the reason for sharp/clean completions? Why are 4 scales used?**
Yes, we find that a single global discriminator is not able to effectively discriminate between real and fake point clouds, and hence we use several of them at different point cloud resolutions/scales. We tried more complicated discriminators involving PointConv and attention mechanisms but were not able to train successfully with such methods, which is in line with the findings of Wang et al. [55] (see also lines 214-218 of our paper).
We also find our loss functions $L_{comp}$ and $L_{part}$ to be important for producing sharper completions (see our new table and comments on this presented in our global response).
We use 4 scales because our method adopts the upsampling procedure from the SOTA deterministic point cloud completion method SeedFormer [7], which also uses 4 layers (an initial coarse completion + 3 upsampled completions). Note that we start with a coarse completion of 256 points and upsample by a factor of 2 at each upsampling layer to produce a final completion of 2048 points. We did not explore more than 4 scales, since introducing scales beyond 2048 points would prevent a fair comparison against the other multimodal shape completion methods, which all produce 2048 points as output.
**Visualization of distribution of style codes**
In Figure 2 of our attached rebuttal PDF we share a plot of our learned style codes projected into 2D space using PCA. By querying random 2D points and visualizing the corresponding ground truth shapes in the neighborhood we find that the shapes in each cluster/neighborhood tend to share some characteristic/style. Such a result suggests that our style encoder is effectively learning to extract styles from the distribution of complete shapes.
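The 2D projection described here can be reproduced with a plain SVD-based PCA; a minimal sketch, with random arrays standing in for the learned style codes:

```python
import numpy as np

def pca_project(codes, n_components=2):
    """Project style codes (N, D) onto their top principal components."""
    centered = codes - codes.mean(axis=0, keepdims=True)
    # rows of vt are the principal directions, ordered by singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
style_codes = rng.normal(size=(512, 64))  # stand-in for learned style codes
points_2d = pca_project(style_codes)      # (512, 2), ready for scatter plotting
```

Neighborhoods in the resulting 2D scatter can then be queried to visualize the shapes whose codes fall in each cluster.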
**Is method applicable to unseen categories?**
No, our method is trained per-category similar to cGAN [8], IMLE [9], and PVD [10]. | Rebuttal 1:
Rebuttal: We thank the reviewers for taking the time to read and review our work. We have tried our best to answer and address any questions and clarifications in each individual reviewer response. In the rest of our global response, we discuss a common ablation that was requested across several reviewers and share some comments on some additional figures in the PDF we have attached for our rebuttal.
--------------------------
### Ablation of loss functions:
| | MMD ($\downarrow$) | TMD ($\uparrow$) | UHD ($\downarrow$) |
| :--- | :----: | :----: | :----: |
| w/o $L_{comp}$ | 1.62 | 4.61 | 4.58 |
| w/o $L_{part}$ | 1.81 | 5.97 | 13.03 |
| w/o $L_{div}$ | 1.70 | 0.41 | 3.57 |
| Ours | 1.50 | 4.36 | 3.79 |
As requested by several reviewers, we have performed an ablation on our loss functions in the table above. Without $L_{comp}$, our method has to rely on the discriminator alone to encourage sharp completions in the missing regions. This leads to a drop in completion quality (MMD). It also hurts partial reconstruction quality (UHD), as our completion loss $L_{comp}$ provides additional gradient signal to the partially observed regions as well. Without $L_{part}$, completions fail to respect the partial input, leading to poor UHD. We observe that failing to respect the partial input also leads to a general degradation in completion quality (worse MMD). With the removal of either of these losses, we do observe an increase in TMD; however, we note that TMD is trivial to increase at the expense of completion quality, since noise can simply be introduced. Without $L_{div}$, TMD drastically decreases towards 0, suggesting no diversity in the completions. This shows how crucial our diversity penalty is for preventing conditional mode collapse. When using all three losses, our method obtains good completion quality, faithfully reconstructs the partial input, and produces diverse completions. We will include this study in the final version.
--------------------------
### Visualizations in attached PDF:
In Figure 1 of our attached PDF, we share several completions (in blue) of a partial input together with each completion's nearest neighbor (in yellow) among the ground-truth complete shapes in the training set. Note that our method produces a different nearest neighbor for each completion of a partial input, showing that our method can overcome conditional mode collapse. Additionally, each nearest neighbor is similar to the partially observed region and varies more in the missing regions, suggesting that our method captures plausible diversity in our completions that matches the variance in the ground-truth shape distribution.
In Figure 2 of our attached PDF, we plot our learned style codes from shapes in the training set by projecting them into 2D using principal component analysis (PCA). To better understand whether our style encoder is learning to extract style from the shapes, we visualize the corresponding shapes in random neighborhoods/clusters of our projected data. We find that the shapes contained in a neighborhood have a shared style or characteristic. For example, the chairs in the brown cluster all have backs whose top is curved while the black cluster has chairs that all have thin slanted legs.
Pdf: /pdf/6b5732f5e1968ed43828cc02b6978416767ed393.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes a novel GAN for diverse shape completion from partial point clouds. To enable diverse completions, a style-based generator is introduced that leverages style codes from a learned distribution of complete shapes for style modulation. Further, a multi-scale discriminator and a diversity penalty are proposed for better diversity.
Strengths: The paper is well-written and easy to follow. The experimental evaluation is thorough and supports the main claims in the paper. Both quantitative and qualitative results demonstrate the efficiency of the proposed approach over existing works.
Weaknesses: While the supplementary includes an ablation with different diversity penalties, it should also contain an experiment without the diversity penalty for completeness. As the diversity penalty is one of the contributions of this work, I would consider moving the study to the main paper.
It would further be interesting to report the nearest neighbors in the training set for the completed regions in the results because the style encoder learns a distribution over the completed shapes.
A discussion of failure cases should be added to the supplementary.
For completeness, the missing values in Table 3 for “Ours \dagger” should be added.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please also see Weaknesses.
How can diffusion-based methods, e.g. LION, be extended for point cloud completion? Would it be possible to train a variant that was conditioned on the partial point cloud (i.e. similar to 2D image inpainting with diffusion models) and report this as a further baseline?
While diffusion models are limited wrt to inference speed, it would be interesting to include them as a baseline due to their strong generative performance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations were discussed, but a broader impact section should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer A3pQ for their valuable and constructive comments.
**Diversity penalty ablation**
In our global response we have provided a new table where we train our method without diversity penalty $L_{div}$. Please see our global response for further comments on this ablation. We'll adopt our diversity penalty ablations to the main paper in the final version.
**Report nearest neighbors in the training set for completions**
In Figure 1 of our attached rebuttal PDF we have included some examples of the nearest neighbor in the training set to a completion. Please see our global response for further comments on this. We will add this to our supplementary material in our final version.
**Discussion of failure cases**
We have included a few examples of failure cases in Figure 3 of our attached PDF for our rebuttal. Similar to any other generative model, our method occasionally fails to produce plausible completions. We observe that the occasional failures that happen are usually either due to missing thin structures or some noisy artifacts. We will add this to our supplemental in our final version.
**Missing values in Table 3**
Note that for the methods denoted by $\dagger$ in Table 3, we evaluate using the protocol of PVD [10]. In particular, we produce a completion with 2048 points, subsample 1024 of them, and then directly concatenate the subsampled 1024 points to the partial input. In this case, it is not useful to evaluate UHD as it will always be 0, and thus we simply mark it with an asterisk.
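A minimal sketch of that evaluation protocol as described above (array shapes are illustrative):

```python
import numpy as np

def pvd_protocol_output(completion, partial, rng):
    """Subsample 1024 of the 2048 completed points and concatenate them
    directly to the partial input, following the PVD evaluation protocol.
    Because the partial input is copied verbatim, UHD is 0 by construction."""
    idx = rng.choice(completion.shape[0], size=1024, replace=False)
    return np.concatenate([partial, completion[idx]], axis=0)

rng = np.random.default_rng(0)
completion = rng.normal(size=(2048, 3))  # full completion from the model
partial = rng.normal(size=(1024, 3))     # observed partial input
out = pvd_protocol_output(completion, partial, rng)
```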
**Extending LION [31] to point cloud completion as a baseline to compare against diffusion models**
While modifying unconditional shape generation methods like LION to handle point cloud completion could be done, such a modification would require design/architectural choices on how to incorporate the partial point cloud input, and would likely take a significant amount of effort to produce a reasonable baseline. Instead, we compare our method to Point-Voxel Diffusion (PVD) [10], a diffusion-based method that can perform diverse shape completion. We show that our method outperforms PVD quantitatively and qualitatively, while being 500 times faster, in Table 2, Table 3, and Figure 5 of our main paper as well as Figure 3 of our supplementary material.
**Broader impacts**
As a point cloud completion method, it has the potential to be used in robotics applications such as planning. As of right now, we do not foresee immediate negative social impacts until our method is integrated into real robots. However, incorporating such a completion method into robotics applications is a scope for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the additional ablation study. This answers my questions. | null | null | null | null | null | null |
PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance | Accept (poster) | Summary: Current restoration approaches based on diffusion priors rely on prior knowledge of the degradation process and thus fail to adapt seamlessly to different scenarios. Motivated by this, the paper proposes a "partial guidance" approach that directly models the distribution properties of high-quality images and then exploits them as guidance for the diffusion process. Extensive experiments demonstrate the advantage of the proposed approach.
Strengths: - The proposed method diverges from traditional practices of modeling the degradation process and instead focuses on modeling the desired visual properties of high-quality images. The learned visual cues then serve as guidance for the diffusion generation process. This motivation and the proposed model are technically sound to me. As demonstrated in the paper, the method adapts to different degradation situations, yielding outstanding results.
- The experiments conducted in the study seem solid, featuring comprehensive comparisons with various baseline approaches.
Weaknesses: - For face restoration, it's important to make sure the restoration is performed in an identity-preserving way, and quantitative evaluation regarding this is crucial. However, such quantitative evaluation seems to be missing in the paper.
- To clarify further, I have no doubt the image quality of the proposed approach outperforms the baseline, but I'm concerned if the quality is improved at the cost of degradation in identity. So it's important to get a quantitative evaluation regarding this
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How would the proposed method compare to the pre-trained text2img diffusion model?
- Let me explain for further clarification: with the current pre-trained text2img diffusion model (stable diffusion, for example), one can perform image enhancement using img2img pipeline, where you start with an inversed image (plus a small amount of noise) and guide the diffusion process with text prompt like "a high-resolution high-quality photo". It would be interesting to add such an experiment for comparison.
- for your reference, by "img2img" pipeline I mean: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/img2img
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No major limitation in terms of societal impact
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.'] | Rebuttal 1:
Rebuttal: **Identity-preserving evaluation on blind face restoration.** We provide a quantitative evaluation regarding identity preservation in the table below. Given the importance of identity preservation in blind face restoration, we introduced reference-based restoration in Sec. 4.5 of the manuscript in addition to the general restoration in Sec. 4.1. Please note that existing methods do not have a mechanism for accepting additional identity or reference guidance.
Since we employ heavy degradation settings when synthesizing CelebRef-HQ, identity features are largely distorted in severely corrupted input images (see Fig. 6 of the rebuttal material). It is therefore almost impossible to predict an identity-preserving face without additional identity information. However, with our reference-based restoration, a high-quality reference image of the same person helps generate personal characteristics that are very similar to the ground truth. This large gain in identity preservation can also be seen in the table below, where our reference-based method achieves the highest IDS, improving by 0.32 over our method without a reference and significantly exceeding the other methods. Thanks to the flexibility of our framework, high-quality reference images effectively contribute to restoring low-quality inputs, especially in terms of identity preservation.
| *Blind Restoration* | **Ours (w/ ref)** | **Ours (w/o ref)** | **DifFace** | **GFPGAN** | **CodeFormer** |
|-----------|:------------:|:--------:|:-----------:|:----------:|:--------------:|
| **IDS**$\uparrow$ | **0.76** | 0.44 | 0.56* | 0.36 | 0.55 |
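For context, IDS is typically computed as the cosine similarity between face-recognition embeddings (e.g., ArcFace features) of the restored image and the ground truth, averaged over the test set; a minimal sketch with stand-in embeddings:

```python
import numpy as np

def identity_similarity(emb_restored, emb_gt):
    """Mean cosine similarity between identity embeddings of shape (N, D).
    The embeddings stand in for ArcFace features of restored/GT faces."""
    a = emb_restored / np.linalg.norm(emb_restored, axis=1, keepdims=True)
    b = emb_gt / np.linalg.norm(emb_gt, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))
```

A score of 1.0 means identical identity embeddings; values near 0 mean the identities are unrelated.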
**Comparison between our method and text2img diffusion model.** As suggested by reviewer ggEw, we compare the results generated by the img2img stable diffusion model guided by different text prompts in Fig. 5 of the rebuttal material. We observe that a general text prompt like “A high-resolution high-quality photo” fails to generate a truly high-quality restored output given a severely corrupted input. Adding more detailed descriptions (e.g., “boy”, “smiling boy”, and “grinning boy”) of the input image helps enhance the output quality, but at the cost of a significant loss of fidelity and identity. By comparison, our method clearly outperforms the text2img diffusion model in terms of both quality and fidelity.
---
Rebuttal Comment 1.1:
Comment: We appreciate your reviews and comments. Since it is close to the end of the discussion period, may we ask if our rebuttal has resolved your concerns?
If you have any further questions, please do not hesitate to reply to our responses. Thank you again for all the constructive comments. | Summary: This paper proposes to use some simple properties to guide the reverse diffusion process. The proposed approach makes no assumptions about the degradation process. This paper also shows many different face restoration visual results to demonstrate the superiority of the proposed method.
Strengths: 1. The proposed method has been evaluated on many different face restoration tasks, which demonstrates that it can be easily adapted to other face restoration tasks.
2. Overall, this paper is well-written and easy to understand.
Weaknesses: 1. The main contribution of this paper is using some simple properties to guide the reverse diffusion process, which is inherited from classifier guidance [7]. What is the technical and original contribution of this paper? Please discuss the technical differences between the proposed work and classifier guidance [7].
2. How to choose the properties that are used in different tasks? This deserves an ablation study. Please show the influence of some different properties.
3. This paper claims one of the advantages of the proposed method is that it makes no assumption about the degradation process. As far as I know, DifFace also does not need to know the degradation process. What is the advantage of the proposed method over DifFace?
4. No quantitative results are found in the manuscript. Visual results alone are not enough to evaluate the proposed method. Please provide a fair quantitative comparison against the other methods on commonly used datasets such as CelebA-Test and the three real-world testing datasets proposed in GFPGAN.
5. The inference time is also important for evaluating the proposed method. Please provide an inference time comparison against the other methods.
6. Some failure cases can contribute to understanding the limitation of the proposed method. Please show some failure cases.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see Weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper proposes to use some simple properties to guide the reverse diffusion process. But this paper has not discussed the limitations of the proposed method. I suggest the author discuss the potential and limitations of the proposed method in more detail such as showing some failure cases, which will make the contributions of the proposed method more significant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Technical differences between our method and classifier guidance.** To model the desired properties of high-quality images, we devise an instantiation named partial guidance by adapting classifier guidance to image restoration (IR) tasks. While the rough idea is inherited from the classifier guidance framework, we extend the classifier from a label predictor to a dense predictor corresponding to a specific image property in the restoration task. More importantly, we propose a special design for the classifier network (see A.3 in the supplementary), a dynamic guidance scheme (see Sec. 3.2 of the manuscript), and composite guidance (see Sec. 3.2 of the manuscript), all of which are crucial for flexibly combining properties to solve versatile image restoration tasks. To our knowledge, we are the first to extend classifier guidance to an IR paradigm where knowledge of the degradation process is not needed.
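To illustrate the shared mechanism: classifier guidance shifts the reverse-diffusion mean by the scaled gradient of a guidance loss, and partial guidance swaps the label classifier for a dense property predictor. A toy numpy sketch, where the "property" is simply an L2 match to a target (all names are illustrative, not the paper's implementation):

```python
import numpy as np

def property_loss(x, target):
    """Stand-in dense property loss: L2 distance of the sample to a target."""
    return 0.5 * np.sum((x - target) ** 2)

def guidance_gradient(x, target):
    # analytic gradient of the L2 property loss above
    return x - target

def guided_mean(x_t, mean_t, sigma_t, target, scale):
    """Classifier-guidance mean shift: mu' = mu - s * sigma^2 * grad(loss),
    i.e. mu + s * sigma^2 * grad(log p(property | x_t))."""
    return mean_t - scale * sigma_t ** 2 * guidance_gradient(x_t, target)

def guided_step(x_t, mean_t, sigma_t, target, scale, rng):
    """One reverse step: shifted mean plus the usual Gaussian noise."""
    mu = guided_mean(x_t, mean_t, sigma_t, target, scale)
    return mu + sigma_t * rng.normal(size=x_t.shape)
```

The shifted mean moves the sample toward states where the property loss is lower, without ever modeling how the input was degraded.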
**Choice of Properties.** The choice of properties for each task is listed in Tab. 1 of the manuscript. The selected properties are common ones involved in each task. For example, lightness and color values make up a colorized image, so in colorization it is natural to guide on these two properties. For blind face restoration, guidance can only come from the input image, and we find that using a cleaned version of the input gives better performance than using the original input image (see Fig. 4 of the rebuttal material). As for extending general face restoration to reference-based restoration, identity features (e.g., ArcFace embeddings) are widely used to extract personal characteristics, so we propose to guide on the identity features of a reference image of the same identity as the input.
**Advantages of the proposed method against DifFace.** Our method is a more general framework than DifFace; in fact, DifFace is a special case of our method when tackling blind face restoration. Specifically, DifFace is equivalent to applying partial guidance at only one iteration step multiple times until convergence. As for the application scope, while DifFace is confined to blind face restoration only, our method is versatile across multiple tasks thanks to the flexibility of our framework. Furthermore, extensive qualitative and quantitative results together show that our method achieves better quality and fidelity in the blind face restoration task than DifFace.
**Quantitative evaluation and user study.** Please refer to the global response in "Author Rebuttal" for detailed elaboration.
**Inference time.** The table below shows the inference time comparison with other methods. The results are tested on a GeForce RTX 3090 with $512 \times 512$ input images. Although the submitted version is based on DDPM, which takes 1000 sampling steps, we also provide an accelerated version using DDIM without a major performance drop (FID: [DDPM] 115.99 vs. [DDIM] 117.99 on inpainting, for example). Further acceleration, such as distillation, is left as future work. Inference can thus be about 10 times faster, which is comparable to almost all the other diffusion-based methods.
| *Inference Time* | **Ours (DDPM, 1000 steps)** | **Ours (DDIM, 100 steps)** | **GDP** | **DDNM** | **DifFace** | **CodeFormer** | **GFPGAN** |
|----------|----------|-----------|---------|----------|-------------|----------------|------------|
| Time (s/img) | 118.18 | 12.02 | 149.62 | 15.78 | 3.96 | 0.05 | 0.03 |
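The DDIM acceleration mentioned above works by taking a deterministic update over a strided subset of the noise schedule instead of visiting all 1000 DDPM steps; a toy sketch with a stand-in noise predictor (not the paper's model):

```python
import numpy as np

def ddim_sample(x_T, alphas_bar, eps_model, num_steps):
    """Deterministic DDIM sampling (eta = 0) over a strided subset of the
    full noise schedule. eps_model(x, t) is a stand-in noise predictor."""
    T = len(alphas_bar) - 1
    # visit only num_steps timesteps instead of all T of them
    timesteps = np.linspace(T, 0, num_steps + 1).round().astype(int)
    x = x_T
    for t, t_prev in zip(timesteps[:-1], timesteps[1:]):
        a_t, a_prev = alphas_bar[t], alphas_bar[t_prev]
        eps = eps_model(x, t)
        # predict the clean sample, then jump directly to the earlier step
        x0_hat = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps
    return x
```

Because the update is deterministic, reducing the number of visited steps (e.g., 1000 to 100) trades a small amount of quality for a roughly proportional speedup.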
**Limitations.** Please refer to the global response in "Author Rebuttal" for detailed elaboration.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer hvsJ
Comment: Thanks for the rebuttal and only some of my concerns are addressed.
Given the extensive nature of the revisions required for the manuscript, I maintain my recommendation.
---
Reply to Comment 1.1.1:
Comment: We understand that you have some remaining concerns, and we apologize for any oversight in fully addressing them all in the rebuttal.
Please let us know what your remaining concerns are after reading our rebuttal. We will carefully review your comments once again and try our best to address your concerns.
---
Reply to Comment 1.1.2:
Comment: Dear Reviewer hvsJ,
As we approach the conclusion of the discussion period, we wish to revisit your outstanding concerns to ensure that we can provide additional clarification or make necessary revisions. If there are any unresolved matters that necessitate further attention, please kindly inform us.
We sincerely value your valuable time and dedication in assisting us in improving the quality of our manuscript.
Best Regards,
The Authors | Summary: This paper proposes partial guidance, an approach exploiting pre-trained diffusion models for face restoration. Instead of making assumption about the specific degradation process, partial guidance models properties of high-quality images such as structure and color statistics to implement classifier-guidance during the reverse diffusion process. This approach suits a range of restoration task and can be extended to composite tasks. Experiments demonstrate the effectiveness of the proposed method.
Strengths: - This paper considers exploiting image properties independent of any specific degradation process to tackle versatile face restoration problems, which has demonstrated effectiveness in experiments.
- The qualitative results of the proposed method provided in the paper and the supplementary material are sufficient and impressive.
Weaknesses: - Except for the blind face restoration task, more quantitative results are expected for the remaining tasks, such as face colorization and face inpainting.
- In blind face restoration task, the pretrained restorer (such as Real-ESRGAN stated in the supplementary material) is employed and finetuned to predict smooth semantics, but these intermediate results are missing, and the improvement of the proposed method compared to the pretrained/finetuned restorer is not demonstrated or analyzed.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - For the blind face restoration task, in addition to real-world data, why is synthetic data (like degraded CelebA/CelebA-HQ) not used for evaluation with more quantitative metrics (like PSNR, SSIM, LPIPS), as adopted in previous work like DifFace?
- Other concerns have already been mentioned in Weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors have addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Quantitative evaluation and user study.** We believe that PSNR and SSIM fail to reflect the true image quality. For example, in Tab.1 of CodeFormer, input images achieve the third highest PSNR scores and the highest SSIM scores across all methods. Thus, we exclude them from full-reference metrics in our evaluation and employ LPIPS and IDS instead. Please refer to the global response in "Author Rebuttal" for detailed elaboration.
**Visualization of intermediate results.** As shown in Fig. 3 of the rebuttal material, given an input low-quality image, we visualize intermediate results during the sampling process. $f(y_0, x_t, t)$ shows the smooth semantics predicted by the pretrained restorer at time $t$, which serves as guidance for sampling $x_{t-1}$. We observe that the pretrained restorer can only predict faces without rich details and sharpness at the early stage. Thanks to our design of including $x_t$ alongside $y_0$ as input to $f$, the restorer's prediction benefits from $\hat{x}_0$, which grows sharper over the course of sampling, and in turn enhances the sharpness of the sampled $x_{t-1}$.
---
Rebuttal Comment 1.1:
Comment: We appreciate your reviews and comments. Since it is close to the end of the discussion period, may we ask if our rebuttal has resolved your concerns?
If you have any further questions, please do not hesitate to reply to our responses. Thank you again for all the constructive comments.
---
Rebuttal Comment 1.2:
Comment: Thank you for your rebuttal.
I will increase my score to 5 since most of my concerns are addressed.
---
Reply to Comment 1.2.1:
Comment: We are pleased that our response has addressed your concerns. Your invaluable comments are deeply appreciated! | Summary: This paper proposes a novel solution for blind face restoration. Instead of modeling the degradation process, the authors propose to model the desired properties of high-quality images as classifiers. Similar to guided diffusion, the authors guide the diffusion generation process with specific classifiers to achieve image restoration. The visual results convincingly demonstrate the effectiveness of the proposed method.
Strengths: 1. The idea of modeling the desired properties of high-quality images as classifiers is novel and interesting.
2. The proposed method can solve blind image restoration and can restore images following the reference properties, e.g., color and identity, which is novel and practical.
3. The paper is well-written and easy to follow.
Weaknesses: 1. There seems no quantitative evaluation. It is not persuasive enough with only visual comparisons.
2. The setting of hyperparameters may be difficult.
3. Some overclaims. For example, the authors state in line 115, "Our partial guidance does not assume any prior knowledge of the degradation process." However, PG needs to know the degradation type, and for inpainting the mask is also needed.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Please see the weaknesses.
2. I am curious about the effectiveness of PG on natural image restoration. For example, what about applying PG on stable diffusion?
3. For the colorization task, the lightness constraint is linear and can be analytically solved in a similar way to DDNM. I wonder if it works to simply apply Color Statistics to DDNM?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Though the evaluation is poor, I still give a relatively positive score for the time being and hope that the authors can supplement quantitative experiments (and subjective questionnaires for blind restoration) and objectively discuss the problems existing in practical applications. After all, this work is about blind restoration. If there is a new method that works well, it will be very helpful to the community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Quantitative evaluation and user study.** Please refer to the global response in "Author Rebuttal" for detailed elaboration.
**Setting of hyperparameters.** As mentioned in A.2 of the supplementary material, while it is in principle possible to tune the hyperparameters case by case, we provided a set of default parameter choices for each homogeneous task in Table 1.
**Statement of our agnosticism to the degradation process.** We intended to emphasize that our method is agnostic to the degradation process in blind restoration. Existing works that employ guided diffusion for image restoration (DDRM, DDNM, GDP) require prior knowledge of the degradation process (e.g., downsampling kernels). By modeling the desired properties of high-quality images, we bypass this requirement. Thank you for pointing out the inpainting case where a mask is needed; we will revise our claim to exclude the inpainting task.
**Applying partial guidance on natural image restoration.** Due to the different nature of human faces and natural images, we plan to explore partial guidance on natural images in our future work. Nevertheless, we have made some early explorations that demonstrate its feasibility. We provide some natural image restoration results in Fig. 1 of the rebuttal material. In addition, we also provided natural image inpainting and colorization results in Fig. 7 of the supplementary material. Since stable diffusion performs the diffusion process in the latent space, it requires careful design of appropriate properties in cases where existing ones are not applicable. This aspect will be further investigated in our future work.
**Applying color statistics on DDNM.** As shown in Fig. 2 of the rebuttal material, we apply our partial guidance to DDNM by guiding $\hat{x}_{0|t}$ (computed by DDNM) toward AdaIN($\hat{x}_{0|t}$). DDNM produces more vivid and natural colorized outputs with various color statistics given as guidance. It shows that our partial guidance can be flexibly applied in the DDNM framework for further improvement.
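As an illustrative aside, the color-statistics operation referenced above (AdaIN-style matching of per-channel mean and standard deviation) can be sketched in a few lines. This is a minimal NumPy sketch of the general technique, not the authors' implementation, and the array layout `(C, H, W)` is an assumption:

```python
import numpy as np

def adain(content, reference, eps=1e-5):
    """Match the per-channel mean and std of `content` to those of `reference`.

    Arrays are assumed to be (C, H, W). A simplified sketch of adaptive
    instance normalization as used for color-statistics guidance.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    r_mean = reference.mean(axis=(1, 2), keepdims=True)
    r_std = reference.std(axis=(1, 2), keepdims=True) + eps
    # Whiten per channel, then re-color with the reference statistics
    return (content - c_mean) / c_std * r_std + r_mean
```

In a guidance loop one would then penalize the distance between $\hat{x}_{0|t}$ and `adain(x_hat_0t, reference)` when computing the guidance gradient.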
**Limitations.** Please refer to the global response in "Author Rebuttal" for detailed elaboration.
---
Rebuttal Comment 1.1:
Comment: Your rebuttal dispelled most of my concerns and I will increase my score to 6.
---
Reply to Comment 1.1.1:
Comment: We are glad that our response can resolve your concerns. Thank you so much for your invaluable comments! | Rebuttal 1:
Rebuttal: We are encouraged that the reviewers find our work novel and interesting [Reviewer GVHx, ggEw]; practical and versatile in multiple image restoration tasks [Reviewer GVHx, FgM3, hvsJ, ggEw]; presenting impressive and outstanding visual results [Reviewer GVHx, FgM3, ggEw]; well-written and easy to follow [Reviewer GVHx, hvsJ].
In light of the common concerns about more quantitative evaluation and discussion on limitations of our method, we supplement with a detailed discourse on these matters here.
**Quantitative evaluation and user study.**
**For blind face restoration.** We provided a quantitative comparison in Tab. 2 of the supplementary, evaluating on three *real-world* datasets LFW-Test, WebPhoto-Test, and WIDER-Test. Our method achieves best or second-best scores across all three datasets for both FID and NIQE metrics.
We also include a quantitative evaluation on the *synthetic* CelebRef-HQ dataset in the table below. Considering the importance of identity-preserving in blind face restoration, we introduced reference-based restoration in Sec. 4.5 of the manuscript in addition to the general restoration in Sec. 4.1. As shown in Fig. 2 of both the manuscript and supplementary materials, since DifFace, GFPGAN, and CodeFormer are relatively more competitive than others, we make quantitative comparisons of our methods against them. The table below shows that our methods achieve best or second best scores across both no-reference (NR) metrics for image quality (i.e., FID and MUSIQ) and full-reference (FR) metrics for identity preservation (i.e., LPIPS and IDS).
Since we employ heavy degradation settings when synthesizing CelebRef-HQ, it is noteworthy that identity features are largely distorted in severely corrupted input images (see Fig. 6 of the rebuttal material). Thus, it is almost impossible to predict an identity-preserving face without any additional identity information. Nevertheless, with our reference-based restoration, we observe that a high-quality reference image of the same person helps the restored face recover personal characteristics highly similar to the ground truth. The large enhancement of identity preservation is also reflected in the table below, where our reference-based method achieves the highest IDS, an increase of 0.32.
| *Blind Restoration* | **Ours (w/ ref)** | **Ours (w/o ref)** | **DifFace** | **GFPGAN** | **CodeFormer** |
|-----------|:------------:|:--------:|:-----------:|:----------:|:--------------:|
| **FID**$\downarrow$ | 121.25* | **119.98** | 123.18 | 186.88 | 129.17 |
| **MUSIQ-KonIQ**$\uparrow$ | 64.67 | 67.26* | 60.98 | 63.33 | **69.62** |
| **LPIPS**$\downarrow$ | 0.35* | **0.34** | 0.35* | 0.49 | 0.36 |
| **IDS**$\uparrow$ | **0.76** | 0.44 | 0.56* | 0.36 | 0.55 |
In addition to the qualitative and quantitative evaluations on blind face restoration, the results of a user study involving 30 participants are shown in the table below. Each participant is shown 30 randomly sampled image triplets and asked to select the visually better restored output. Each triplet is composed of one corrupted input, one result from our method, and one from a randomly chosen baseline. Participants prefer our results over those of DifFace and GFPGAN. Considering that CodeFormer is a task-specific method requiring careful multi-stage training, our method achieves comparable performance even without extensive training on the task.
| *User Study* | **DifFace** | **GFPGAN** | **CodeFormer** |
|-----------------|:--------:|:-------:|:--------------:|
| **Favoring Ours**$\uparrow$ | 72.60% | 72.01% | 51.22% |
**For face inpainting and colorization.** We provide a quantitative evaluation on the *synthetic* CelebA-Test dataset in the table below. We believe that the ability to produce diverse results is also crucial in these two tasks, and the evaluation should not be constrained to similarity to the ground truth. Thus, we opt for NR metrics including FID, NIQE, and MUSIQ instead of FR metrics. Regarding the MUSIQ metric, it has been trained on different datasets for various purposes. For inpainting, we employ MUSIQ-KonIQ, which focuses on quality assessment, while for colorization, we choose MUSIQ-AVA, which puts more emphasis on aesthetic assessment. Although CodeFormer has a better NIQE score on the colorization task, it clearly alters the input identity (see Fig. 6 of the supplementary) and requires training a separate model for each task. On the contrary, our method requires only a pre-trained diffusion model for both inpainting and colorization, and achieves the best scores across almost all metrics. The qualitative results in Fig. 3, 4 of the manuscript and Fig. 5, 6 of the supplementary also demonstrate the superiority of our approach.
| *Inpainting* | **Ours** | **CodeFormer** | **DDNM** |
|-----------------|----------|----------------|----------|
| **FID**$\downarrow$ | **115.99** | 120.93 | 137.57 |
| **NIQE**$\downarrow$ | **3.65** | 4.22 | 5.35 |
| **MUSIQ-KonIQ**$\uparrow$ | **73.20** | 72.48 | 59.38 |
| *Colorization* | **Ours** | **CF** | **DDNM** |
|------------------|----------|--------|----------|
| **FID**$\downarrow$ | **119.31** | 126.91 | 146.66 |
| **NIQE**$\downarrow$ | 4.71 | **4.43** | 5.11 |
| **MUSIQ-AVA**$\uparrow$| **5.23** | 4.91 | 4.07 |
**Limitations.**
As our partial guidance is based on a pre-trained diffusion model, our performance largely depends on the capability of the model in use. The pre-trained face diffusion model we employed is trained on the FFHQ dataset, where side faces seldom exist. As a result, failures tend to occur on side faces (see Fig. 7 in the rebuttal material). Since CodeFormer and DifFace are also trained on the FFHQ dataset, we share similar issues on side faces.
Pdf: /pdf/d00ce5ec208de971b517cc682028a67dcabc2c8e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Fast Conditional Mixing of MCMC Algorithms for Non-log-concave Distributions | Accept (poster) | Summary: This paper introduces a new concept of conditional convergence of an algorithm (i.e. convergence of the distribution restricted to e.g. a local mode), and presents extension of recent results using e.g. Poincare Inequality to bound this new measure.
Strengths: I think this paper tackles an important problem: sampling from multi-modal distributions is likely to get stuck in a local optimum, and so understanding how well it samples from that mode is sensible (particularly when thinking about how a sampling-based approach compares to an optimisation one).
The idea and definition of conditional convergence is new and interesting -- and one could see a new body of research around this type of topic.
The theory is based on a single, simple result (Lemma 1), which can then be used to generalise existing results on convergence to conditional convergence.
Weaknesses: The main weakness is that the current results, e.g. Corollary 1, are weaker than one would hope. In particular:
Corollary 1 seems to have a time-dependent step-size (so to understand pi_T we need a smaller step size as T increases). This is different from how algorithms are implemented.
The conclusion after corollary 1 does not make sense to me. “Either the probability mass … over S converges to 0, or…” : But whilst LMC can converge slowly, it still converges to pi (if we ignore the discretisation error). Thus surely for any set S that is not a null-set of pi we have pi_T(S) will be non-zero in the limit as T goes to infinity?
More generally, this corollary is not quite the result you want. You want something that says “if I have a partition of the state-space into sets S_1,S_2,\ldots, and observe that the proportion of time LMC stays in S_i is greater than … then the LMC output restricted to S_i will be close to pi restricted to S_i” or similar.
So I struggle to really understand the practical importance of the result.
Separately, the presentation could be improved in places:
E.g. missing “the”s for *the* local Poincare inequality, and *the* Poincare Figure information (and e.g. “fisher”->”Fisher”, gaussian to Gaussian etc.); also “Gibbs sampling on discrete state *spaces*” etc.
When describing Langevin Monte Carlo, there are two different approaches depending on whether you use a Metropolis correction. The paper could be clearer about what its results relate to. From the definition in Algorithm 1 it looks like you are considering the unadjusted Langevin algorithm (ULA).
If you are using ULA, then there is a discretisation error — i.e. the stationary distribution for ULA is different to that of the Langevin diffusion/the distribution you want to sample from. I think the paper could be up-front about this, and discuss how this impacts the results in the paper. I also wonder if informal statements such as “The convergence of LMC” (top p.4) are really referring to the convergence of the Langevin diffusion and not ULA. The paper does comment on “PI .. implies the convergence of LMC when the step size h is small”, but this could be more precise. I guess part of the confusion is that you are commenting on converging, but it is not completely clear what you are converging to. In general the ULA will not converge to pi for a fixed (albeit small) step-size as the discretisation error will change the stationary distribution.
The argument under Fisher information seems to ignore the discretisation error — i.e. it holds just for the Langevin diffusion, but it says LMC. (Though this is corrected in Proposition 1, which is for LMC). This part could have been improved with some references to the results you are summarising.
[It may be that a cleaner presentation would be to have had a subsection commenting on the approximation error between ULA and the Langevin diffusion, and then presenting the results just for the Langevin diffusion?]
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Can you more convincingly explain the practical importance of Corollary 1?
As part of this, it feels like the rate of convergence matters (as we have convergence of LMC in general, just that it can be very slow). The slowness of LMC is, I believe, linked to e.g. constants for convergence depending on features of the target (more separated modes implies slower convergence) -- so it feels like you want stronger results than Corollary 1 where you get a handle on the constants. I.e. are the results in Corollary 1 uniform over targets?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the importance of our problem. After reading the review, we find that there could be several misunderstandings. We begin with clarifying these misunderstandings and then address other concerns.
**On "convergence of LMC".** In this paper, when we talk about "convergence of LMC", we mean that "by running LMC with step size $h$ and total iteration $T$, we can obtain a distribution $\mu$ with distance (global or conditional) smaller than $\varepsilon$ from the target distribution $\pi$, where $h$ and $T$ can depend on $\varepsilon$". This definition of convergence has been employed by many existing works including [1,2]. The reviewer seems to misunderstand the "convergence" as "the step size $h$ does not depend on $\varepsilon$". We will include the above explanation of "convergence of LMC" in the revised paper.
**On Practical Importance of Corollary 1.** To address this point, we briefly recall the motivation and goal of our study:
(1) The global convergence of LMC in general non-log-concave cases has been proven to be **exponentially slow in the worst case**. For a simple example, consider the following mixture of two standard Gaussians:
> $p(x;c) \propto \frac{1}{2} e^{- (x+c)^2} + \frac{1}{2} e^{- (x-c)^2}$.
The global convergence is provably exponentially slow in the separation distance $c$.
(2) However, Corollary 1 shows the remarkable fact that **local mixing can be fast even when global mixing is provably slow.** For instance, for $p(x;c)$ above, $\pi_t \vert_{(-\infty, 0]}$ is $\varepsilon$-close to $p(x;c) \vert_{(-\infty, 0]}$ in $O(1/\varepsilon^4)$ steps, **independent of $c$** (ditto for $[0,\infty)$). As the reviewer notes, **one would expect that** "more separated modes imply slower convergence", so it is highly remarkable that the local convergence rate **in fact does not depend on the separation distance $c$!**
More generally, Corollary 1 guarantees polynomial-time sampling as long as **each mixture component has a good LSI constant**, which is a much more lax requirement than for the entire distribution to have good LSI. For instance, each component of $p(x;c)$ has LSI constant of $\Theta(1)$, but $p(x;c)$ has LSI constant of $\Theta(e^{-c^2})$.
(3) The catch to Corollary 1 is that local convergence says nothing about mixture weights, so you can have two very unbalanced modes. However, in many practical settings, even guaranteeing accurate samples within a mode is valuable (e.g. one might care more about generating accurate pictures of dogs/cats, and care less about the ratio of dog pictures to cat pictures.)
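To make the claimed contrast concrete, the following is a minimal, purely illustrative NumPy sketch of ULA on the two-Gaussian mixture $p(x;c)$ above (not the paper's experiments; the step size and iteration count are arbitrary choices). The gradient uses the closed form $\nabla \log p(x) = -2(x - c\tanh(2cx))$, which follows directly from the mixture density:

```python
import numpy as np

c = 10.0  # separation; global mixing slows exponentially in c, local mixing does not

def grad_log_p(x):
    # For p(x; c) ∝ ½ e^{-(x+c)^2} + ½ e^{-(x-c)^2}, ∇log p(x) = -2(x - c·tanh(2cx))
    return -2.0 * (x - c * np.tanh(2.0 * c * x))

def ula(x0, step, n_iters, rng):
    """Unadjusted Langevin algorithm: x ← x + h·∇log p(x) + sqrt(2h)·ξ."""
    x, out = float(x0), np.empty(n_iters)
    for k in range(n_iters):
        x += step * grad_log_p(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        out[k] = x
    return out

rng = np.random.default_rng(0)
samples = ula(c, step=0.01, n_iters=20000, rng=rng)
# Initialized inside the right mode, the chain effectively never crosses to the
# left mode for large c, yet its conditional law quickly resembles N(c, 1/2).
```

Running this for different values of $c$ shows the point of Corollary 1: the chain's within-mode statistics stabilize at the same speed regardless of how far apart the modes are, even though global mixing between the modes is exponentially slow in $c$.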
**Other concerns.**
**Q1**: Corollary 1 uses time-dependent step-size.
**A1**: In all our results, $T$ stands for the total number of iterations that the algorithm runs. The $T$-dependent step-size can easily be transformed into an $\varepsilon$-dependent step-size. For instance, in Corollary 1, for target accuracy $\varepsilon$ we need $T = O(\alpha^2 d^2 /\varepsilon^4)$, so the step size is $\delta = \alpha d/\varepsilon^2$. Perhaps you will find this formulation more familiar; most MCMC papers do have a step size depending on $\varepsilon$ (e.g. in [1,2]).
**Q2**: "probability mass ... converges to 0" seems redundant.
**A2**: The reviewer also acknowledges that "LMC can converge slowly". In fact, the global convergence rate for a simple mixture of Gaussians can often be as slow as $e^{-d}$. It is therefore highly non-trivial that Corollary 1 has polynomial dependence on all problem parameters. As $T$ goes to infinity, a "set S that is not a null-set of pi" will indeed eventually have significant mass, but this can take **an exponentially long time (see the Gaussian mixture example in "On Practical Importance of Corollary 1." above)**, so one cannot simply assume that every set S has sufficiently positive probability in practical settings where the total iteration number $T$ is not exponential in the problem dimension $d$.
**Q3**:" this corollary is quite the result you want"
**A3** The reviewer's suggestion is in fact quite similar to our Corollary 1. Consider the following re-statement of Corollary 1, using similar phrasing as the reviewer's suggestion:
> “if I have a partition of the state-space into sets S_1,S_2,\ldots, and observe that **the probability $Z_t$ stays in S_i** is greater than … then **$\bar{\pi}_t$ restricted to S_i** will be close to $\pi$ restricted to S_i”
The difference from the reviewer's suggestion is bolded. Qualitatively, Corollary 1 is almost identical to the reviewer's suggestion, except we consider probability at time $t$ ($\mu_t$) instead of path-average. We are thus unsure about what the reviewer means here. We will greatly appreciate it if the reviewer can elaborate on/clarify this point.
**Q4**: Presentation.
**A4**: We thank the reviewer for this suggestion and will correct the typos and update the manuscript. We will also further highlight (currently in line 113-118) that we consider the unadjusted Langevin algorithm and the analyses done in our work are for discretized algorithms.
**Q5**: Is Corollary 1 uniform over all targets?
**A5**: We are uncertain about the reviewer's use of the term "uniform," but we will attempt to clarify it in this context.
1. It is indeed possible to derive outcomes from Corollary 1 based on certain "features of the target." This is due to the rate in Corollary 1 being dependent on the local LSI constant $\alpha$, which in turn relies on the "features." In Section 4.3, we demonstrate how to calculate the local LSI using the "features."
2. Although the global convergence rate is also influenced by the "features," it can be exponentially slow and, therefore, impractical. This is exemplified by the mixture of two standard Gaussian distributions, as discussed earlier.
**References:**
[1]. Analysis of Langevin Monte Carlo from Poincar´e to Log-Sobolev
[2]. Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo
---
Rebuttal Comment 1.1:
Comment: My concern about Corollary 1 is that the narrative after the corollary was that either the chain will assign small probability to S, or the conditional distribution (given S) is well approximated. But this is not true. As an example, consider your mixture example for large c and a chain started at 0, with S=(0,\infty). By symmetry pi_T(S)=1/2 for all T. So this is not an example where the chain will assign a small probability to S.
In practice (for large c and finite T) the chain is likely to spend all its time in S or none of it in S. So half the time it will approximate pi restricted to S well, and the other half of the time it will not. (In fact it is not clear how you define your approximation to pi restricted to S in the 50% of the runs of the algorithm where it does not visit S.)
What you want the theory to stay is "given that the chain spends a substantial proportion of its time in S then it will approximate the conditional distribution of pi restricted to S well" but this is not what the result says.
Another way of thinking about it is that there are two cases where you have poor mixing for multi-modal targets where it is difficult to move between modes. One is where you start in or near one mode (or much closer to one mode than others). In this case I can see how your result makes sense. There will be one mode you are likely to approximate well, and all other modes for which the probability assigned to that mode will be small.
The other is where you start in the tails between two or more modes and there is randomness as to which mode you will find. In this case your result does not make sense. There will be a non-negligible probability associated with two (or more) modes. And for any run you can only approximate one of these well in practice. Thus, as with the above example, your result does not make sense.
---
Reply to Comment 1.1.1:
Title: Thank you for the response
Comment: We thank the reviewer for the response. After reading the reviewer's response, we realized that most of the reviewer's concerns stem from a misunderstanding of the definition of our distribution $\bar{\pi}_T$. We would like to first clarify this as follows.
**On the distribution $\bar{\pi}_T$**:
* By our definition on line 165, $\pi_t$, for $t\in[0,T]$, is the distribution of Langevin iterate $X_t$. We define $\bar{\pi}_T = \frac{1}{T} \int_0^T \pi_t dt$. Corollary 1 discusses convergence guarantees for this $\bar{\pi}_T$.
* In most of the reviewer's discussions, the reviewer seems to confuse $\bar{\pi}_T$ with the chain distribution: choose a single initialization point $z_0$, run the algorithm for $(T+n)$ steps, select $z_{T+1}, z_{T+2},...,z_{T+n}$, and consider the empirical distribution generated from these samples.
*In a word, the main difference here is that we consider the underlying distribution of LMC while the reviewer seems to consider one sample trajectory of this random process, i.e., a particular instantiation of the LMC path.*
The reviewer's misunderstanding is best illustrated by considering the statement:
> The other is where you start in the tails between two or more modes and there is randomness as to which mode you will find. In this case, your result does not make sense. There will be a non-negligible probability associated with two (or more) modes. And for any run, you can only approximate one of these well in practice.
Contrary to what the reviewer claims, in the above example, $\bar{\pi}_T$ **can indeed approximate all of the modes well**. Even if a particular **instantiation of the LMC path** only visits one mode, the **underlying distribution $\bar{\pi}_T$** can well-approximate each mode (conditional on the support of the mode).
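The trajectory-versus-distribution distinction can be seen numerically by running many independent chains. The following is an illustrative sketch (hypothetical parameter values, not the paper's experiments): each individual trajectory commits to a single mode, yet the ensemble of final iterates, which serves as a finite-sample proxy for the underlying distribution $\pi_T$, covers both modes:

```python
import numpy as np

# Many independent ULA chains on p(x; c) ∝ ½ e^{-(x+c)^2} + ½ e^{-(x-c)^2},
# all initialized at the saddle x = 0 between the two modes (illustrative values).
c, h, n_steps, n_chains = 5.0, 0.01, 2000, 500
rng = np.random.default_rng(0)
x = np.zeros(n_chains)
for _ in range(n_steps):
    grad = -2.0 * (x - c * np.tanh(2.0 * c * x))  # ∇log p for this mixture
    x = x + h * grad + np.sqrt(2.0 * h) * rng.standard_normal(n_chains)
# Any single chain ends up committed to one mode, but the ensemble of final
# iterates places roughly half its mass on each mode, with samples within each
# mode concentrating around ±c.
```

This is exactly the situation described above: conditioning the ensemble law on either half-line recovers the corresponding mode well, even though no single trajectory visits both modes.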
Based on this clarification, we address other concerns of the reviewer below:
> "In practice (for large c and finite T) the chain is likely to spend all its time in S or none of it in S". So half the time it will approximate pi restricted to S well, and the other half of the time it will not.
It is unclear to us what "the chain is likely to spend all its time in S" means, but we conjecture the reviewer means "if you initialize the LMC from $0$ and look at one trajectory of LMC, it will stay in S all the time (or never)"; this concern thus stems from confusing $\bar{\pi}_T$ with the chain distribution. As pointed out above, our Corollary 1 is for the averaged underlying distribution $\bar{\pi}_T$ of the iterates, not the empirical distribution formed by iterates within one trajectory. Therefore, the fact that "the chain spends all its time in S in one trajectory" does not imply that $\bar{\pi}_T(S)$ is large, and Corollary 1 cannot be applied to show that "half the time it will approximate pi restricted to S well".
> What you want the theory to stay is "given that the chain spends a substantial proportion of its time in S then it will approximate the conditional distribution of pi restricted to S well" but this is not what the result says.
Neither our motivation nor our result has anything to do with "the time the chain spent in S", so we are confused as to why this is "what we want". Still, we conjecture that the reviewer reasons: "$\bar{\pi}_T(S)$ is the chain distribution. If the chain spends a substantial proportion of its time in S, then $\bar{\pi}_T(S)$ is large and thus Corollary 1 can be applied". However, this is not true, as $\bar{\pi}_T$ is not the chain distribution (see the discussion above), and we do not intend to claim anything based on the chain distribution. What we want is stated clearly in Corollary 1: either the probability mass of the **underlying** distribution $\bar{\pi}_T$ is small, or the conditional **underlying** distribution well approximates the conditional target distribution.
Finally, we ask that the reviewer explain what he means by "does not make sense", as it is difficult to respond to a vague English statement. Does the reviewer mean
1. Corollary 1 must be wrong, or
2. Corollary 1 is vacuous?
In the case of 1, we ask the reviewer to please point to a mistake in our proof, or alternatively to present a counter-example, with the density in question described clearly in mathematical notation.
In the case of 2, we have already explained the significance of our result. Other reviewers have also noted that our bounds are quite relevant to understanding non-convex samples, e.g. Reviewer Edvq notes that *"In particular, the mixing rates seem to offer very good insight into the phenomenon of “metastability”, i.e. that particular modes may be well-explored while the global structure is not correct."*. | Summary: The paper studies MCMC algorithms like the Langevin dynamics and Gibbs sampler on non-log-concave distributions. Many natural distributions are non-log-concave and multimodal, for example, mixtures of Gaussians and the posterior distribution of Gaussian mixtures. While classical results show that MCMC algorithms suffer from slow mixing on such multimodal distributions, the paper shows that when isoperimetric inequalities such as Poincare or log-Sobolev hold on a subset X of the state space, the conditional distribution of the MCMC iterate over X mixes fast to the conditional distribution of the target distribution on X. Thus, the paper shows that while MCMC algorithms converge to the true global distribution slowly, it can still converge very fast locally. For example, on a mixture of two isomorphic Gaussians, the Langevin dynamics (LMC) converges to the true conditional distribution around each mode but might put the wrong weight on the two Gaussian components. i.e. the true distribution is mu = 1/2 N(-u, sigma^2) +1/2 N (u, sigma^2) but the distribution of the LMC might be 1/3 N(-u, sigma^2) + 2/3 N(u,sigma^2). 
To show these results, the paper uses that for any target distribution mu, including non-log-concave ones, the LMC quickly converges to a distribution nu whose Fisher information to mu is small [Balasubramanian, Chewi, Erdogdu, Salim, Zhang—PMLR’22], then uses isoperimetric inequalities for the conditional distribution of mu on subsets of the state space to show that if the conditional distribution of mu on S satisfies isoperimetric inequality, then either nu puts small mass on S or the conditional distribution of nu on S is close to that of mu on S in Kullback-Leiber or chi-square distance (see Lemma 1, Corollary 1 for the case when the conditional distributions satisfy log-Sobolev inequalities, and Lemma 2, Theorem 2 and Corollary 2 for the case when the conditional distributions satisfy the weaker Poincare inequalities). Note that this is essentially the best statement one can hope for: for example, if mu is a mixture of two Gaussians whose centers are very far apart, and the LMC is initialized at the center of the first Gaussian, then the LMC will put almost all mass on the first Gaussian component and almost 0 mass on the second Gaussian component and one cannot have any reasonable guarantee about the conditional distribution of the LMC iterate on regions around the mode of the second component. In Theorem 2, the paper proves an analogous result for Gibbs sampler on discrete state space such as the hypercube. For a distribution mu supported on the hypercube {0,1}^d, the Gibbs sampler operates by picking one random coordinate and flipping the value at that coordinate from 0 to 1 or 1 to 0 with probabilities chosen so that mu is the stationary distribution (see Section 5). 
The paper shows that if the vertices of the hypercube can be partitioned into subsets X_1,.., X_m such that on each subset, the Markov chain induced by the Gibbs sampler has a large spectral gap, then either the distribution nu produced by the Gibbs sampler puts small mass on X_i or the conditional of nu on X_ is close to that of mu on X_i.
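As an illustrative aside (not from the paper), the coordinate-wise update described above can be sketched generically: pick a uniform coordinate and resample it from its conditional under the target distribution $\mu$ on $\{0,1\}^d$. The helper below is a hypothetical minimal implementation assuming access to the target's log-density:

```python
import numpy as np

def gibbs_step(x, log_p, rng):
    """One random-scan Gibbs step on {0,1}^d.

    Picks a uniform coordinate i and resamples x_i from its conditional under
    the target whose (unnormalized) log-density is `log_p`; x is updated in place.
    """
    i = rng.integers(len(x))
    x0, x1 = x.copy(), x.copy()
    x0[i], x1[i] = 0, 1
    # P(x_i = 1 | x_{-i}) = p(x1) / (p(x0) + p(x1)), computed from log-densities
    p1 = 1.0 / (1.0 + np.exp(log_p(x0) - log_p(x1)))
    x[i] = 1 if rng.random() < p1 else 0
    return x
```

For a simple product-Bernoulli($q$) target, for example, the chain's empirical per-coordinate marginals converge to $q$; for multimodal targets such as those partitioned into $X_1,\dots,X_m$ above, the same update mixes fast within each $X_i$ by the paper's Theorem 2.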
Strengths: The paper shows an interesting result using relatively simple techniques. While fast convergence of LMC iterates in Fisher information for general non-log-concave distributions and local isoperimetric inequalities are known and used in previous works (see [Balasubramanian, Chewi, Erdogdu, Salim, Zhang—PMLR’22] and [Mou, Ho, Wainwright, Bartlett, Jordan’2019]), the paper cleverly combines these two ingredients to show local mixing of familiar MCMC algorithms like the LMC and Gibbs sampler on multimodal distributions. The main results appear to be novel and correct. I verified the proofs of Lemma 1, Corollary 1, and Lemma 3.
Weaknesses: The result on mixtures of Gaussians requires the assumption that the Gaussian components share the same covariance. It’s unclear if this assumption is natural, and the author(s) don’t give any justification for this assumption.
The paper has a few typos. Details below.
- Proof of Lemma 1, supplement, appendix A: The claim is either $\mu(S) \leq \sqrt{\epsilon}/\sqrt{\alpha}$ or $Ent_{\pi|S}(\mu|S || \nu|S ) \leq \sqrt{\epsilon}/\sqrt{\alpha}$, but the proof instead shows that either $\mu(S) \leq \sqrt{\epsilon}$ or $Ent_{\pi|S}(\mu|S || \nu|S ) \leq \sqrt{\epsilon}/\alpha$. The fix is simple: in line 416, it should be $\mu(S)\leq \sqrt{\epsilon}/\sqrt{\alpha}$, and in the following displayed equation, the RHS should be $\epsilon/(\sqrt{\epsilon}/\sqrt{\alpha}) = \sqrt{\epsilon \alpha}$.
- In the proof of Lemma 3, supplement, appendix C.1: $c$ is undefined, though I believe $c = \min_i w_i$, so that $c \leq w_j$ and $c p_j \leq w_j p_j$ hold in the first line of the proof. The proof of Lemma 3 only proves that $p|S_j$ satisfies LSI with constant $1/\min_i w_i \sigma^{-2}$, but Lemma 3 claims that the LSI constant is $\max_i w_i/\min_i w_i \sigma^{-2}$. However, $1 \leq m \max_i w_i$, where $m$ is the number of components/parts in the partition of the state space, so the bound the proof achieves is only worse than the claimed bound by a factor of $m$, which doesn’t significantly affect the result.
- In the statement of Lemma 2, in both the main paper and the supplement: $\pi$ satisfies PFI -> $\pi$ satisfies PI.
- In Section 4.3.1 of the main paper, line 210, $P$ is undefined, but I believe the author(s) mean $U$.
- In Lemma 5, lines 239 and 240 of the main paper, the author(s) claim that the target distribution satisfies a local LSI, but its proof (Lemma 13, line 467 of the supplement) states that the target distribution only satisfies local Poincare inequalities.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: -For mixtures of Gaussians, is the assumption that Gaussian components have the same covariance matrix necessary? Is this assumption natural?
-The quasi-concave condition for the Gibbs sampler to have a large spectral gap appears rather unnatural. The author(s) could consider investigating natural multimodal discrete distributions such as the Ising/Curie-Weiss model at low temperatures (see [Levin-Luczak-Peres—Probability and Related Fields’2010]).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and positive feedback. Your concerns are addressed as follows.
**Q1**: Typos & mistakes.
**A1**: We thank the reviewer for pointing out the typos. We will update our draft with the following changes:
a. Lemma 1: In line 416, it should be $\mu(\mathcal{X}_i) < \sqrt{\frac{\epsilon}{\alpha}}$, and the RHS of the following equation should be $\sqrt{\epsilon\alpha}$.
b. Lemma 2: $\pi$ should obey $PI(\rho)$ instead of $PFI(\rho)$ in both the main paper and supplement.
c. Lemma 3: $c$ should be $\min_{i} w_i$. The result should be that $\pi\vert S_i$ obeys $LSI(\frac{\sigma^{-2}\min_i w_i}{m \max_i w_i})$, or just $LSI(\sigma^{-2}\min_i w_i)$, where $m$ is the number of components in the partition of the state space.
d. Section 4.3.1: In lines 210 and 211, $P$ should be $U$, consistent with the earlier notation.
e. Lemma 5: LSI in lines 239 and 240 is a mistake. It should be PI, which is consistent with our proof in the supplement and with the following bounds on variance instead of entropy.
**Q2**: Assumption that the Gaussian components share the same covariance matrix.
**A2**: We thank the reviewer for pointing this out. The assumption that the components share the same covariance is necessary for our analysis. Extending the result to mixtures with different covariance matrices is possible but highly nontrivial. We mainly considered two ways to tackle this more general situation:
1. Find a partition into sublevel sets that are all convex. Here we tried to use the same partition as in our paper. However, the assumption that the Gaussian components share the same covariance matrix is important in our proofs, as it enables us to decompose the sample space into convex subsets.
If we allow different covariances, the sublevel sets (i.e. $S_i = \{x: p_i(x) \geq p_j(x), \forall j \neq i\}$) need no longer be convex. As an example, consider the following mixture of Gaussians over $\mathbb{R}^2$: $p=\frac{1}{2}p_1+\frac{1}{2}p_2$, where $p_1\propto \mathcal{N}(0,\mathbb{I})$ and $p_2\propto \mathcal{N}((1,0),2\mathbb{I})$. The set $\{x: p_2(x)\ge p_1(x)\}$ can be viewed as the result of removing an ellipsoidal section from the entire space, which is not convex. Therefore, the Bakry-Émery criterion can no longer be applied to derive a logarithmic Sobolev inequality. It may still be possible to derive a local logarithmic Sobolev inequality by considering other partitions, which we leave as future work.
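The non-convexity of this example can be checked numerically; the following sketch (ours, for illustration only) exhibits two points of $\{x: p_2(x)\ge p_1(x)\}$ whose midpoint lies outside the set:

```python
import numpy as np

def log_density(x, mean, scale):
    """Log-density of N(mean, scale * I) in 2D."""
    diff = np.asarray(x, dtype=float) - mean
    return -0.5 * diff @ diff / scale - np.log(2 * np.pi * scale)

def in_S2(x):
    """Membership in {x : p_2(x) >= p_1(x)} for p1 = N(0, I), p2 = N((1,0), 2I)."""
    return log_density(x, np.array([1.0, 0.0]), 2.0) >= log_density(x, np.zeros(2), 1.0)

a, b = np.array([-10.0, 0.0]), np.array([10.0, 0.0])
mid = (a + b) / 2                      # the origin
print(in_S2(a), in_S2(b), in_S2(mid))  # → True True False: the set is not convex
```

Intuitively, far from both centers the wider component $p_2$ dominates, so the set is the complement of a bounded ellipsoidal region and cannot be convex.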
2. Use identical Gaussians to approximate mixtures of K non-homogeneous Gaussians. More generally, a more complex distribution, such as a mixture of K non-homogeneous Gaussians, can be approximated by N identical Gaussians for some N > K. The price for this reduction is an increase in the number of components; we have not worked out the optimal tradeoff for how large N needs to be.
We acknowledge that sampling from more complicated mixtures is an important question for future work, and thank the reviewer for raising this point.
**Q3**: Quasi-concave condition for the Gibbs sampler seems unnatural
**A3**: We first explain our intuition behind the quasi-concave condition. We aimed to identify a discrete analogue of log-concavity. To define concavity on a discrete space, we naturally resort to geodesic concavity, in which all super-level sets of the function are convex with respect to geodesics (i.e. shortest paths). In our paper we show that such a condition lower bounds the spectral gap of Glauber dynamics, so previous analyses readily apply.
We appreciate the reviewer's reference to closely related and interesting work. It may be possible to generalize the Curie-Weiss model to a multimodal setup, perhaps by increasing the rank of the energy function. We think this is an interesting direction, but we will not be able to finish it within the NeurIPS timeline.
Thank you once again for taking the time to review our paper. We sincerely hope that our response to your concerns, as well as the overall response to other reviewers’ concerns, helps assuage your concerns and leads you to view this paper in a more favorable light.
---
Rebuttal Comment 1.1:
Title: Replying to rebuttal
Comment: Thank you for your detailed response. | Summary: This work studies the convergence of MCMC algorithms for sampling from non-log-concave distributions, which is much less well understood than the log-concave setting. The authors introduce the notion of conditional mixing, which occurs when the Markov chain is close to the true (conditional) distribution when conditioned on being in some specific part of the space. They give sufficient conditions under which conditional mixing occurs, with applications to sampling from mixtures of Gaussians and related problems.
Strengths: The authors propose a new framework to go beyond log-concave sampling, an important problem in the sampling literature, and give evidence for its utility. They show that in some cases conditional mixing can occur very quickly, whereas global mixing provably takes much longer.
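This fast-conditional/slow-global behavior can be illustrated with a minimal Langevin Monte Carlo simulation; the bimodal target, step size, and horizon below are illustrative assumptions of ours rather than the paper's setup:

```python
import numpy as np

def grad_U(x, centers=(-5.0, 5.0), sigma2=1.0):
    """Gradient of U = -log p for an equal-weight 1D two-Gaussian mixture."""
    logits = np.array([-(x - c) ** 2 / (2 * sigma2) for c in centers])
    w = np.exp(logits - logits.max())
    w /= w.sum()                       # posterior weight of each component at x
    return sum(wi * (x - c) / sigma2 for wi, c in zip(w, centers))

rng = np.random.default_rng(0)
h = 0.01                               # step size (illustrative choice)
x = -5.0                               # initialize at the first mode
traj = np.empty(5000)
for t in range(5000):
    # LMC update: x <- x - h * grad U(x) + sqrt(2h) * standard Gaussian noise
    x = x - h * grad_U(x) + np.sqrt(2 * h) * rng.normal()
    traj[t] = x
# Locally the iterates look like N(-5, 1); the chain essentially never crosses
# to the mode at +5 over this horizon: fast conditional, slow global mixing.
```
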
The paper is overall well-written.
Weaknesses: I think non-expert readers could benefit from a high-level technical overview.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Regarding sampling from a Gaussian mixture distribution: Could you mention where your techniques break down when allowing differing covariances for the mixture distributions?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Maybe I missed this, but I do not think limitations were mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and positive feedback. Your concerns are correspondingly addressed as follows.
**Q1**: Lack of high-level technical overview.
**A1**: We thank the reviewer for the suggestion. We will expand our related work section to provide a broader overview of the problem. Due to limited space, we cannot copy the entire section here, but we give an outline below:
a. MCMC, LMC, and their applications.
b. Theoretical guarantees for LMC, and that PI / LSI / log-concavity can lead to fast mixing.
c. Hardness result for LMC without log-concavity, evidenced by two mixtures of Gaussian.
d. Results on non-log-concave distributions.
e. Discrete-space MCMC.
**Q2**: Difficulties in allowing different covariances for the mixture components.
**A2**: Our current analysis relies on one important fact: the sublevel sets (i.e. $S_i = \{x: p_i(x) \geq p_j(x), \forall j \neq i\}$) are convex when the covariances are identical, because the sublevel sets can be viewed as a Voronoi diagram after an affine transformation. However, if we allow each component to have a different covariance, the sublevel sets may no longer be convex. As an example, consider the following mixture of Gaussians over $\mathbb{R}^2$: $p=\frac{1}{2}p_1+\frac{1}{2}p_2$, where $p_1\propto \mathcal{N}(0,\mathbb{I})$ and $p_2\propto \mathcal{N}((1,0),2\mathbb{I})$. The set $\{x: p_2(x)\ge p_1(x)\}$ can be viewed as the result of removing an ellipsoidal section from the entire space, which is not convex. Therefore, the Bakry-Émery criterion can no longer be applied to derive a logarithmic Sobolev inequality. It may, however, still be possible to derive a local logarithmic Sobolev inequality by considering other partitions. This is highly nontrivial and we leave it as future work.
**Q3**: No limitations were mentioned.
**A3**: Thanks for pointing this out. We include limitations in the revised manuscript. Some limitations of this paper are:
1. the lack of a guarantee that our local mixing rate is sharp;
2. the lack of a principled way to derive local isoperimetric inequalities beyond mixtures of Gaussians with identical covariances;
3. the lack of additional potential implications of fast conditional mixing.
Thank you once again for taking the time to review our paper. We sincerely hope that our response to your concerns, as well as the overall response to other reviewers’ concerns, helps assuage your concerns and leads you to view this paper in a more favorable light.
---
Rebuttal Comment 1.1:
Comment: Thank you for your very detailed response. | Summary: This paper, following the framework of Balasubramanian et al., shows that for target distributions that are non-log-concave, isoperimetric inequalities on subsets of the state space will yield fast mixing for the conditional distributions of MCMC on that space. This adds formal justification for the observed phenomena of metastability and “local convergence”. Additionally, some analogous conditional convergence results are also reported for a random walk on a hypercube.
Strengths: The definition and usage of these local isoperimetric conditions is entirely novel, as well as the local mixing results. In particular, the mixing rates seem to offer very good insight into the phenomenon of “metastability”, i.e. that particular modes may be well-explored while the global structure is not correct.
The paper is well illustrated with examples such as the Gaussian mixture and the posterior sampling example, with transparent computation of the constants. In particular, it is easy to see when the conditions of the paper hold, and to compute the resulting rate estimates.
The hypercube sampling result is novel, although it is difficult to assess this in the context of existing literature since the assumptions/results differ greatly.
Experimental evidence is also provided to quantify the phenomena of local convergence.
Weaknesses: The rate of local mixing in both the local LSI and local Poincare cases is quite bad. See my remark in the “Questions” section.
The discrete state space case is difficult to assess compared to other results for discrete space MCMC. If this is to be a main result in the paper, I would recommend a more detailed survey of the literature on discrete space MCMC with relevant comments be included in the Related Work.
The experiments are not surprising and seem to illustrate the same phenomena as seen in earlier works on multimodal sampling, but I would argue that they are still fairly useful illustrations in the context of this paper. In my opinion, the experiments section could be shortened.
This paper could benefit from some proofreading to catch grammatical mistakes, of which I was able to find quite a few.
To summarize, I feel that this paper makes unique contributions to the theory and intuition of MCMC algorithms, and the issues with it are relatively minor. Therefore I am happy to recommend acceptance.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: The local mixing seems to be at a significantly worse rate than the global mixing under e.g. LSI, which we would expect to be something like $d/\epsilon$ at least for the LSI case. On the other hand, we obtain something more like $d^2/\epsilon^4$. Is there a principled reason for this difference and could the rate potentially be improved, or is the analysis expected to be somewhat sharp.
In some prior works, e.g. [6, 23], Renyi divergence is considered as a natural “measure” of convergence, due to the analytical simplicity of the resulting expressions. Would analogous results hold for this case? Furthermore, is there any barrier to considering e.g. Latala-Oleszkiewicz or other isoperimetries in the analysis? I would imagine so given that the “Fisher information” would now need to correspond to an inequality of LO-type.
How does the discrete sampling result compare to existing MCMC results in that field? See my comment in “Weaknesses”.
Typos:
Fisher in Fisher information should always be capitalized
L. 74 space after Balasubramanian
L. 81 distribution -> distributions
L. 82. Multimode -> multimodality
L. 106 “the Langevin Monte Carlo algorithm”, function inequalities -> functional inequalities
The definition of a Lipschitz condition should be given explicitly.
Some in-line equations are a bit difficult to read and should be presented as displays, e.g. L. 231 or L. 252.
L. 286 “D” -> “Appendix D”
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: None beyond those raised in earlier sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to read our paper, and for the comments and suggestions. Here are our main responses:
**Q1**: I would recommend a more detailed survey of the literature on discrete space MCMC with relevant comments be included in the Related Work.
**A1**: Thank you for the suggestion. We will provide a more detailed literature review on discrete space MCMC in Section 2 of the revised paper.
**Q2**: The experiments section could be shortened.
**A2**: Thank you for the suggestion. We acknowledge that the current experiment section is a bit redundant, and will make the experiments section more condensed in the revised paper.
**Q3**: The local mixing seems to be at a significantly worse rate than the global mixing under e.g. LSI. Is there a principled reason?
**A3**: Thank you for the insightful question. We do not claim that our rate is sharp, and the local mixing rate may well be improvable. Our main goal is to derive a mixing rate that is _polynomial in the dimension_. Investigating the optimal local mixing rate is definitely an interesting future direction.
**Q4**: Can the analysis be extended under Renyi divergence?
**A4**: We acknowledge that Renyi divergence is a good measure of convergence, and we tried to extend our analysis to Renyi divergence but failed. In the analysis for Renyi divergence, the analogue of the Fisher information $FI(\pi_t||\pi)$ is $G_{q,\pi}(\pi_t)/F_{q,\pi}(\pi_t)$. However, there is no clear relationship between $G_{q,\pi}(\pi_t)/F_{q,\pi}(\pi_t)$ and its local version due to the presence of $F_{q,\pi}(\pi_t)$ in the denominator. Therefore, even if we can show that $G_{q,\pi}(\pi_t)/F_{q,\pi}(\pi_t)$ is small, we are unable to convert this to its local version and apply local isoperimetry.
**Q5**: Is there any barrier to considering Latala-Oleszkiewicz isoperimetry?
**A5**: Thank you for the suggestion. To our knowledge, existing analyses under Latala-Oleszkiewicz isoperimetry use Renyi divergence, so the same problem as in A4 occurs.
Finally, thank you for pointing out the typos and minor suggestions, they have been duly revised.
We sincerely hope that our response to your concerns, as well as the overall response to other reviewers’ concerns, helps assuage your concerns and leads you to view this paper in a more favorable light.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their responses. I still have a number of concerns that I would like to be addressed.
**A1**: I am still curious about the relevant discrete space results. Is the rate obtained here anywhere close to "optimal" in this setting? What is the expected result? Please comment if possible.
**A3**: From a closer inspection, I see that using Proposition 1 means that $d^2/\epsilon^2$ is the best possible rate, while Lemma 1 is adding the factors of $\epsilon^2$. I would expect then that this approach is not sharp, although I don't currently have any ideas about how it could be improved. Nonetheless I hope the authors continue to investigate this question in future works.
Regardless, I feel that this work presents some novel and genuinely useful intuition regarding the phenomena of local convergence in sampling. So long as there are improvements in the presentation of the final draft, I am happy to raise my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for raising the score and the recognition of our contribution. We will improve the presentation and content of the final draft as promised. Below we address the additional concerns.
**Q6**: Is the rate obtained here anywhere close to "optimal" in this setting? What is the expected result?
**A6**: We thank the reviewer for this question. The discrete analysis extends the continuous one, and hence we are not aware of previous works that we can directly compare against.
However, a quick sanity check is a simple random walk on a graph with radius D, which corresponds to Gibbs sampling from the uniform distribution. In this case, the spectral gap is, up to log factors, bounded by $1/D^2$, giving a rate similar to our prediction. Hence, although the assumptions may be relaxed and other dependences may be improved, the $D$ dependence is probably tight in most setups.
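This sanity check can be reproduced numerically. The sketch below (ours, for illustration only) computes the spectral gap of the lazy simple random walk on a path with n vertices from its transition matrix; doubling n shrinks the gap roughly fourfold, consistent with gap = Θ(1/D²):

```python
import numpy as np

def lazy_walk_gap(n):
    """Spectral gap of the lazy simple random walk on a path with n vertices."""
    P = np.zeros((n, n))
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        P[i, i] = 0.5                          # laziness removes periodicity
        for j in nbrs:
            P[i, j] = 0.5 / len(nbrs)
    # the chain is reversible, so the eigenvalues are real up to round-off
    eigvals = np.sort(np.linalg.eigvals(P).real)[::-1]
    return 1.0 - eigvals[1]

gaps = {n: lazy_walk_gap(n) for n in (10, 20, 40)}
print(gaps)
```
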
**Q7**: Whether the mixing rate could be improved?
**A7**: We thank the reviewer for the thoughtful comment. As pointed out, there is an additional $\varepsilon^2$ term in Lemma 1. This term stems from transferring global FI to local FI, where our approach is straightforward, and thus we conjecture that it can be improved. However, we admit that we are not yet aware of how to transfer global FI to local FI painlessly, or whether there is any approach to estimate local FI directly. We believe this is an interesting question to study, and we will continue to investigate it in future work as suggested by the reviewer.
RFold: RNA Secondary Structure Prediction with Decoupled Optimization | Reject | Summary: In this paper, the authors propose a way to decouple the optimization process of RNA secondary structure prediction. Specifically, they decompose the constraint satisfaction problem into row-wise and column-wise optimization. Instead of hand-crafted features, attention maps are used to learn the pair-wise interactions of the nucleotide bases.
Strengths: 1. The authors show that it is more effective to use the attention maps as the input and then use U-Net to predict H, compared with using the hand-crafted features as input, which is interesting.
2. The proposed method reduces the inference time dramatically compared to various methods.
3. It achieves promising results on the RNAStralign dataset and large-scale benchmark evaluation.
Weaknesses: 1. The generalization ability is limited because the proposed method cannot achieve the best recall on ArchiveII and bpRNA-TS0 datasets.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Could the authors use experiments to show that strict constraints cause the less promising generalization ability?
Is it possible to improve the proposed method to gain better generalization ability?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: I did not find the potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer FGw2,
Thank you for your constructive and insightful comments! We appreciate the time and effort you've put into this review and would like to sincerely address your concerns below:
***
**Q1** The proposed method cannot achieve the best recall on ArchiveII and bpRNA-TS0 datasets. Is it possible to improve the proposed method?
**A1** The stringent constraints imposed by our model may result in a lower recall metric, suggesting that some uncertain pairs might be ignored due to these strict rules. In contrast, our precision metrics on ArchiveII and bpRNA-TS0 datasets are much higher, indicating that when RFold predicts a positive, it is likely to be accurate.
Your question is constructive and valuable! The low recall of RFold may be due to the existence of abnormal samples. Although we have illustrated three hard constraints, some abnormal samples that do not satisfy these constraints do exist in practice. After analyzing the datasets used in this paper, we found that there are some abnormal samples in the test set. The ratio of valid samples in each dataset is summarized below:
| Dataset/Method | Validity |
|-------------|:---------:|
| RNAStralign | 93.05% |
| ArchiveII | 96.03% |
| bpRNA | 96.51% |
As shown in Table 6 of our manuscript, RFold enforces a validity of 100.00%, while other methods, like E2Efold, only achieve about 50.31%. RFold, therefore, more accurately reflects the real situation.
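As an illustration of such a validity check, the sketch below tests a contact map against three commonly used hard constraints from this line of work (canonical pairs only, no sharp loops, at most one partner per base); the exact constraint definitions are in the paper, so this helper is our assumption, not the authors' code:

```python
import numpy as np

# canonical Watson-Crick and wobble pairs (our assumption for constraint (a))
VALID_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def is_valid(seq, contact):
    """Check a predicted contact map against three standard hard constraints."""
    contact = np.asarray(contact)
    L = len(seq)
    if contact.shape != (L, L) or not np.array_equal(contact, contact.T):
        return False
    if contact.sum(axis=1).max() > 1:               # (c) at most one partner per base
        return False
    for i, j in zip(*np.nonzero(contact)):
        if abs(i - j) < 4:                          # (b) no sharp loops
            return False
        if (seq[i], seq[j]) not in VALID_PAIRS:     # (a) canonical pairs only
            return False
    return True

seq = "GGGGAAAACCCC"
contact = np.zeros((12, 12), dtype=int)
for i, j in [(0, 11), (1, 10), (2, 9), (3, 8)]:     # a nested G-C stem
    contact[i, j] = contact[j, i] = 1
print(is_valid(seq, contact))  # → True
```

The sample-level validity of a method is then simply the fraction of predicted contact maps for which a check of this kind returns true.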
We introduce a more flexible solution, soft-RFold, detailed in **Appendix B**. By incorporating a checking mechanism and adjusting the confidence threshold, soft-RFold is capable of enhancing the recall metric. Specifically, if the confidence given by the Row-Col Softmax is low, we do not perform the Row-Col Argmax and instead assign additional 1s. The checking mechanism can be implemented as the following pseudo-code:
```
y_pred = row_col_softmax(y)
int_one = row_col_argmax(y_pred)
# get the confidence for each position
conf = y_pred * int_one
all_pos = conf > 0.0
# select reliable position
conf_pos = conf > thr1
# select unreliable position with the full row and column
uncf_pos = get_unreliable_pos(all_pos, conf_pos)
# assign "1" for the positions with the confidence higher than thr2
# note that thr2 < thr1
y_pred[uncf_pos] = (y_pred[uncf_pos] > thr2).float()
int_one[uncf_pos] = y_pred[uncf_pos]
```
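For reference, here is a plausible self-contained NumPy sketch of the `row_col_softmax` / `row_col_argmax` helpers referenced above; this is our illustrative reconstruction, not the official implementation:

```python
import numpy as np

def row_col_softmax(y):
    """Geometric mean of row-wise and column-wise softmax scores."""
    def softmax(a, axis):
        e = np.exp(a - a.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)
    return np.sqrt(softmax(y, axis=-1) * softmax(y, axis=-2))

def row_col_argmax(y_pred):
    """Keep position (i, j) only if it maximizes both its row and its column,
    which guarantees at most one '1' per row and per column."""
    row_max = y_pred == y_pred.max(axis=-1, keepdims=True)
    col_max = y_pred == y_pred.max(axis=-2, keepdims=True)
    return (row_max & col_max).astype(float)

y = np.array([[9.0, 1.0, 0.0],
              [1.0, 0.5, 8.0],
              [0.0, 7.0, 1.0]])
contact_map = row_col_argmax(row_col_softmax(y))
print(contact_map)  # a 0/1 matrix with ones at (0,0), (1,2), (2,1)
```

Because a position must win both its row and its column, the output automatically satisfies the one-partner-per-base structure without any iterative post-processing.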
We conduct experiments to compare the soft-RFold and the original RFold in the RNAStralign dataset:
| Method | Precision | Recall | F1|
|-------------|:---------:|:---------:|:---------:|
| RFold | 0.981 | 0.973 | 0.977 |
| soft-RFold | 0.978 | 0.974 | 0.976|
It can be seen that soft-RFold improves the recall metric by a small margin. The minor improvement may be because the number of abnormal samples is small.
We then select those samples that do not obey the three constraints to further analyse the performance. The total number of such samples is 179.
| Method | Precision | Recall | F1|
|-------------|:---------:|:---------:|:---------:|
| RFold | 0.956 | 0.860 | 0.905 |
| soft-RFold | 0.949 | 0.889 | 0.918|
It can be seen that soft-RFold can deal with abnormal samples well. The improvement of the recall metric is more obvious.
Our soft-RFold solution is quite intuitive, and we believe there is much room for exploration. We appreciate your valuable suggestion and will address this issue as part of our future work. Thanks a lot!
***
---
Rebuttal Comment 1.1:
Comment: I appreciate the clarification provided regarding the recall. It addressed my concerns. | Summary: The paper introduces RFold for RNA secondary structure prediction (a prediction of LxL binary matrix). It proposes to add a row-column-wise softmax at output of the model, before computing the L2 loss with respect to the ground truth. The experimental results show higher precision and recall compared to prior works.
Strengths: - The paper is well-written, with the necessary background, basic introductions, and problem formulation. Thus, this paper is well suited for the general audience of NeurIPS.
- RFold delivers strong performances in two commonly used datasets for evaluating RNA secondary structures.
Weaknesses: The main concern is the limited novelty, RFold is incremental over Ufold. Both methods follow the paradigm of mapping a sequence RNA (using $\theta_{1}$) to $[L \times L \times n]$ features and further mapping the $[L\times L \times n]$ features (using $\theta_{2}$) to $[L \times L \times 1]$ output prediction. RFold differs from Ufold in two parts:
- RFold proposes $\theta_{1}$ to be represented by an attention-based layer. (RFold and Ufold use a similar, if not identical, $\theta_{2}$.)
- RFold adds a column-wise and a row-wise softmax after $\theta_{2}$, before computing the L2 loss.
Thus, RFold makes a few architectural modifications and improves the results.
E2Efold and Ufold also have a section on evaluation with pseudoknots on the RNAStralign test dataset, which this submission does not have.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper has no section for limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer G61p,
Thank you for your thoughtful comments. We hope to address your concerns through the following responses.
***
**Q1** Novelty may be limited.
**A1** Thank you for your efforts in the review process. Please allow us to elaborate on our work.
In general, we can divide deep-learning-based RNA secondary structure prediction methods into three key components: pre-processing, the backbone model, and post-processing:
* The pre-processing step projects the 1D sequence into a 2D matrix. (**1D -> discrete 2D**)
* The backbone model learns from the 2D matrix and then outputs a hidden matrix of continuous values. (**discrete 2D -> continuous 2D**)
* The post-processing step converts the hidden matrix into a contact map, which is a matrix of discrete 0/1 values. (**continuous 2D -> discrete 2D**)
To provide a clear methodology comparison, we have summarized the mainstream deep-learning-based methods based on the above key components. The definitions of constraints (a-c) used here are consistent with those in the paper:
| Method | SPOT-RNA [1] | E2Efold [2] | UFold [3] | RFold |
| --- | --- | --- | --- | --- |
| Pre-Processing | Pairwise concat | Pairwise concat | Pairwise concat + implicit matching [4] | Seq2map |
| Backbone model | CNN + LSTM | Transformer | U-Net | U-Net |
| Post-Processing | Sigmoid($\cdot$)| Unrolled Algorithm | Unrolled Algorithm | Decoupled Optimization |
| Constraint (a) | ✖️ | ✔️ | ✔️ | ✔️ |
| Constraint (b) | ✖️ | ✔️ | ✔️ | ✔️ |
| Constraint (c) | ✖️ | ✖️ | ✖️ | ✔️ |
For the **pre-processing**, RFold is the unique approach that does not require explicit hand-crafted features produced by pairwise concat or implicit matching.
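For concreteness, the pairwise-concat style of hand-crafted 2D input used by prior methods can be sketched as follows; this is an illustrative reconstruction of ours (e.g., UFold actually uses outer products of one-hot codes, yielding 16 channels), not code from any of the cited papers:

```python
import numpy as np

BASES = "AUCG"

def one_hot(seq):
    """One-hot encode an RNA sequence into an (L, 4) matrix."""
    return np.eye(4)[[BASES.index(b) for b in seq]]

def pairwise_concat(seq):
    """Hand-crafted 2D input: concatenate the one-hot codes of bases i and j
    at every grid position (i, j), giving an (L, L, 8) tensor."""
    x = one_hot(seq)
    L = len(seq)
    left = np.repeat(x[:, None, :], L, axis=1)     # base i broadcast over columns
    right = np.repeat(x[None, :, :], L, axis=0)    # base j broadcast over rows
    return np.concatenate([left, right], axis=-1)

print(pairwise_concat("GGAAUCC").shape)  # → (7, 7, 8)
```

Seq2map replaces this fixed construction with attention maps learned directly from the sequence, which is why no manual feature design is needed.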
For the **post-processing**, RFold proposes a novel decoupled optimization to satisfy all constraints. Previous prevalent approaches such as E2Efold and UFold can only approximate; *they cannot guarantee satisfaction of constraint (c)*.
Although RFold uses the same backbone model as UFold, the other two key components are significantly different from UFold. We admire the excellent work of UFold, which introduces more complex pre-processing and a stronger backbone model based on E2Efold to achieve strong performance. However, constraint (c) has not been addressed by UFold.
Our work introduces decoupled optimization to satisfy all constraints, and proposes Seq2map to simplify the previously complex pre-processing, thereby achieving efficient RNA secondary structure prediction. *To the best of our knowledge, our proposed RFold is the first to solve the optimization problem that satisfies all constraints. Moreover, ours is the first approach to automate pre-processing rather than relying on manual design.*
Additionally, as displayed in Table 6 of our manuscript, we noticed that the unrolled algorithm proposed by E2Efold, although it performs well on base-level metrics (precision, recall, and F1-score), only achieves about 50% on the sample-level validity metric. In contrast, our RFold method achieves high scores in both base-level and sample-level metrics.
We believe that our proposed RFold is novel in RNA secondary structure prediction.
***
**Q2** Evaluation with pseudoknots on the RNAStralign test dataset.
**A2** Thank you for the thoughtful suggestion regarding pseudoknot evaluation. Following E2Efold and UFold, we counted the number of pseudoknotted sequences predicted as pseudoknotted, reporting this as true positives. We selected all pseudoknot-containing sequences from the RNAStralign test set. The results are as follows:
| Method | Precision | Recall | F1 score |
| ---------------- | --------- | ------ | -------- |
| RNAstructure [5] | 0.778 | 0.761 | 0.769 |
| SPOT-RNA [1] | 0.677 | 0.978 | 0.800 |
| E2Efold [2] | 0.844 | 0.990 | 0.911 |
| UFold [3] | 0.962 | 0.990 | 0.976 |
| RFold | **0.971** | **0.993** | **0.982** |
Our proposed RFold outperforms the previous state-of-the-art UFold across precision, recall, and F1 score, highlighting the effectiveness of our approach for modeling pseudoknotted structures.
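For completeness, the precision, recall, and F1 values above follow the standard definitions from true-positive/false-positive/false-negative counts; a minimal sketch (the counts below are hypothetical, not from the evaluation):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true positives, false positives,
    and false negatives (standard definitions)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 90 pseudoknotted sequences correctly flagged,
# 10 falsely flagged, none missed.
p, r, f = prf1(tp=90, fp=10, fn=0)   # p = 0.9, r = 1.0, f ~= 0.947
```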
***
[1] Singh, Jaswinder, et al. RNA secondary structure prediction using an ensemble of two-dimensional deep neural networks and transfer learning. Nature communications, 2019.
[2] Chen, Xinshi, et al. RNA secondary structure prediction by learning unrolled algorithms. ICLR, 2020.
[3] Fu, Laiyi, et al. UFold: fast and accurate RNA secondary structure prediction with deep learning. Nucleic acids research, 2022.
[4] Zhang, Hao, et al. A new method of RNA secondary structure prediction based on convolutional neural network and dynamic programming. Frontiers in genetics, 2019.
[5] Mathews, David H., et al. Prediction of RNA secondary structure by free energy minimization. Current opinion in structural biology, 2006.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your insightful suggestions. We have included a more comprehensive comparison and experimental evidence as you suggested. We would like to check in again and ask whether there are any remaining concerns we can address that might raise your evaluation of our work.
---
Reply to Comment 1.1.2:
Comment: Dear reviewer,
We sincerely appreciate your valuable feedback and timely response.
As the deadline for the author-reviewer discussion phase is approaching, we would like to check if you have any other remaining concerns about our paper. If our responses have adequately addressed your comments, we kindly request that you consider increasing the score.
We sincerely thank you for your dedication and effort in evaluating our submission. Please do not hesitate to let us know if you need any clarification or have additional suggestions.
Best Regards,
Authors. | Summary: This work presents an efficient and accurate approach for end-to-end RNA secondary structure prediction.
The optimization problem formulation and its solution are well defined.
The results are strong and supported by visualizations and ablation studies.
Strengths: The key strengths of this work are:
1. Inference is an order of magnitude faster than previous methods.
2. Inference is 4-20% more accurate than previous methods, with significant gains specifically in long-range interactions.
3. A well defined optimization problem formulation and solution.
4. Including visualizations and ablation studies underscores the gains achieved through the optimization formulation and attention architecture.
5. The results are validated using multiple datasets and baselines.
Weaknesses: Weaknesses of this work are:
1. There is a discrepancy in the definition of G in equation 12: it does not incorporate the softmax function, yet equation 15 assumes that it does. This can be fixed by introducing a new notation, such as G_{hat}, which includes the softmax function and ensures consistency.
2. The definition of well-known metrics in section 5 is redundant.
3. The comparison between Rfold and Ufold could be more comprehensive,
describing their similarities and differences.
4. There are a few minor typos, and the writing may be improved.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are the key similarities and differences between Rfold and Ufold?
Visualizing performance as a function of sequence length may be useful.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer RVaK,
Thank you for your thoughtful and inspiring comment!
***
**Q1** The key similarities and differences between Rfold and Ufold?
**A1** Thank you for your meticulous review and insightful question! We can divide general deep-learning-based RNA secondary structure prediction methods into three key parts: pre-processing, the backbone model, and post-processing:
* The pre-processing step projects the 1D sequence into a 2D matrix. (**1D -> discrete 2D**)
* The backbone model learns from the 2D matrix and then outputs a hidden matrix of continuous values. (**discrete 2D -> continuous 2D**)
* The post-processing step converts the hidden matrix into a contact map, which is a matrix of discrete 0/1 values. (**continuous 2D -> discrete 2D**)
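As a toy illustration of the last stage, a mutual-argmax decoding produces a contact map in which every base pairs with at most one partner; this is a hedged sketch in the spirit of the constraint-satisfying decoding, not RFold's exact decoupled optimization:

```python
import numpy as np

def decode_contact_map(h):
    """Keep h[i, j] only when it is the argmax of both row i and column j,
    so each base is paired with at most one partner (a simplified stand-in
    for the continuous-2D -> discrete-2D step, not RFold's exact procedure)."""
    row_best = h.argmax(axis=1)
    col_best = h.argmax(axis=0)
    contact = np.zeros_like(h, dtype=int)
    for i, j in enumerate(row_best):
        if col_best[j] == i:          # mutual argmax -> accept the pair
            contact[i, j] = 1
    return contact

scores = np.array([[0.1, 0.9, 0.2],
                   [0.8, 0.1, 0.3],
                   [0.4, 0.2, 0.1]])   # toy continuous hidden matrix
cmap = decode_contact_map(scores)      # discrete 0/1 contact map
```

Each row and column of the result sums to at most one, which is the one-pairing-per-base property behind constraint (c).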
Here, we will only introduce UFold and RFold. If you are interested in comparing other methods, please refer to the global response.
We summarize the methodology comparison between UFold and RFold in the table below. The definitions of constraints (a-c) used here are consistent with those in the paper:
| Method | Pre-Processing | Backbone model | Post-Processing | Constraint (a) | Constraint (b) | Constraint (c) |
| -------- | -------------- | -------------- | --------------- | -------------- | -------------- | -------------- |
| UFold [1]| Pairwise concat + implicit matching [2] | U-Net | Unrolled Algorithm | ✔️ | ✔️ | ✖️ |
| RFold | Seq2map | U-Net | Decoupled Optimization | ✔️ | ✔️ | ✔️ |
We delineate the key similarities and differences between RFold and UFold below:
**Similarity**:
*Backbone model*: Both RFold and UFold use the U-Net architecture as the backbone model.
**Dissimilarity**:
*Pre-processing*: UFold requires preprocessing of the RNA sequence into hand-crafted features, whereas RFold does not. Specifically, the input of UFold's U-Net is represented as a $17 \times L \times L$ matrix for a given RNA sequence of length $L$, while the input of RFold's U-Net is the feature map of $1 \times L \times L$ from the Seq2map Attention. RFold automatically obtains the feature map in a lightweight way.
*Post-processing*: UFold employs the same post-processing strategy as E2Efold [3], which utilizes an unrolled algorithm. This approach, however, does not guarantee the satisfaction of the three important constraints. In contrast, RFold employs a decoupled optimization approach to satisfy these constraints in a simple and efficient manner.
It can be seen that RFold only shares a similarity with UFold in the use of U-Net as the backbone model. The crucial aspects of pre-processing and post-processing differ significantly between the two.
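To make the shape difference concrete, here is a hedged numpy sketch: an attention-style outer product yields a single L x L map directly from per-base embeddings (in the spirit of Seq2map, with details simplified by us), whereas one-hot pairwise concatenation of the 4 nucleotides yields 16 channels of an L x L tensor (UFold adds one more matching channel, giving 17 x L x L):

```python
import numpy as np

L, d = 8, 16                                   # toy sequence length, embed dim
rng = np.random.default_rng(0)
emb = rng.standard_normal((L, d))              # per-nucleotide embeddings

# Attention-style map: one L x L input channel derived from the sequence itself.
Wq, Wk = rng.standard_normal((d, d)), rng.standard_normal((d, d))
attn_map = (emb @ Wq) @ (emb @ Wk).T / np.sqrt(d)    # shape (L, L)

# Hand-crafted pairwise concat, schematically: outer products of one-hot
# nucleotide encodings give 4 x 4 = 16 channels of an L x L tensor.
onehot = np.eye(4)[rng.integers(0, 4, size=L)]       # (L, 4)
pairwise = np.einsum('ia,jb->abij', onehot, onehot).reshape(16, L, L)
```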
***
**Q2** Paper writing: (i) A discrepancy in the definition of G in equation 12; (ii) The definition of well-known metrics in Eq.(19) is redundant; (iii) A few minor typos.
**A2** We apologize for any confusion our writing may have caused, and we sincerely appreciate your detailed and valuable suggestions. We will thoroughly revise our paper based on your constructive comments.
***
[1] Fu, Laiyi, et al. UFold: fast and accurate RNA secondary structure prediction with deep learning. Nucleic acids research, 2022.
[2] Zhang, Hao, et al. A new method of RNA secondary structure prediction based on convolutional neural network and dynamic programming. Frontiers in genetics, 2019.
[3] Chen, Xinshi, et al. RNA secondary structure prediction by learning unrolled algorithms. ICLR, 2020. | Summary: The paper proposes RFold, a simple and effective RNA secondary structure prediction algorithm. It adopts attention maps to learn informative representations for RNA rather than hand-crafted features. Then, based on a decoupled optimization process, RFold simplifies and guarantees satisfying the hard constraints on the formation of RNA secondary structure. Through the empirical experiments, the authors demonstrate that RFold achieves state-of-the-art performance with better computational efficiency compared to the previous works.
Strengths: - The proposed decoupled optimization seems simple, but surprisingly effective for RNA secondary structure prediction. To the best of my knowledge, the proposed method is novel in the domain and might be promising for the broader machine-learning community.
- The proposed method shows great performance in three RNA benchmark datasets outperforming the previous state-of-the-art method by a significant margin. Some issues need to be addressed regarding the experiment setup (please refer to the weaknesses), but the improved performance seems truly impressive.
Weaknesses: Major comments:
- [Data Split] To best approximate real-world applications that may require the prediction of novel structures, RNAs from the train/val/test set should bear minimal sequence and structural similarities. In contrast, it seems the authors have split datasets so that each RNA family has a similar fraction in each set. I think it may overestimate the true prediction performance of RFold. Likewise, “generalization to other datasets” experiments do not provide information about sequence/structure similarities between the datasets. If they are similar, it may not be a fair evaluation of generalization performance.
- Since the authors stated deep learning methods do not ignore the biologically essential structure such as pseudoknots, can you provide additional separate evaluation under the (non-) existence of pseudoknots?
- According to UFold, the bpRNA dataset contains mostly within family RNA species and does not adequately show the true generalization performance of the models. Can you provide additional evaluations with cross-family experiments?
- [Inference Time] It’s unclear whether the results are credible. The inference time can be quite different based on what type of machine (CPU, GPU, etc.) is used for the measurement. Since the other results seem to be excerpted from the UFold paper, the environments of UFold and RFold are likely to be different.
- [Reproducibility] Architectural hyperparameters are missing. In addition, training code does not seem to be included in the supplementary.
Minor comments:
- [Data Split] The authors stated that they split the RNAStralign dataset following the E2Efold paper. Can you confirm that all the methods including RFold used the same data splits? RNA sequences often have high sequence and structure similarities, so if you used different data splits it might affect the performance.
- As the authors stated, other algorithms often post-process the outputs to satisfy the constraints. Can you also show how the results are improved for RFold-E/S with the post-processing?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Do you have any plans for launching a web server for the proposed method? It would be difficult for many biotechnology researchers to set up the environment and run the algorithm. Therefore, while it is not mandatory, most compared algorithms support web servers for their models.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have not discussed the limitations of the work.
---Post-Rebuttal Comments---
I appreciate the authors' dedication evident in their comprehensive responses. They have effectively addressed many of the concerns I had about the paper. Overall, while some concerns persist, I am inclined to believe that by incorporating the authors' responses, the manuscript's quality would be improved. Hence, I've adjusted my rating to 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer RVaK,
Thanks for your professional and constructive comments! We respond to the questions as follows:
***
**Q1** The data split might overestimate the prediction performance.
**A1** We apologize for the confusion. We did not perform the data splitting ourselves. Rather, all the datasets used in this study come with their official splits, which we strictly adhered to in our experiments.
In line 239, we stated, "*Following [5], we split the RNAStralign dataset into training, validation, and testing sets by stratified sampling.*" This was intended to clarify how the RNAStralign dataset was split. In practice, we directly use the data split from E2Efold. As far as we know, UFold used the same data split for the RNAStralign dataset.
In summary, for both the RNAStralign and ArchiveII datasets, we followed the same data split as E2Efold and UFold, provided in the official E2Efold code. Regarding the bpRNA dataset, it comes with a predefined split (training on TR0, evaluating on VL0, testing on TS0), which we utilized directly in accordance with the official guidelines.
We are sorry for the confusion and will refine the manuscript to make it clear.
***
**Q2** Can you provide additional separate evaluations under the existence of pseudoknots?
**A2** Thank you for your insightful and professional suggestion! Here, we present the evaluation of pseudoknot structure prediction. Following E2Efold and UFold, we count the number of pseudoknotted sequences that are predicted as pseudoknotted and report this count as true positive. We pick all sequences containing pseudoknots from the RNAStralign test dataset. The results are as follows:
| Method | Precision | Recall | F1 score |
| ---------------- | --------- | ------ | -------- |
| RNAstructure [4] | 0.778 | 0.761 | 0.769 |
| SPOT-RNA [1] | 0.677 | 0.978 | 0.800 |
| E2Efold [2] | 0.844 | 0.990 | 0.911 |
| UFold [3] | 0.962 | 0.990 | 0.976 |
| RFold | **0.971** | **0.993** | **0.982** |
As the result demonstrates, RFold consistently surpasses UFold across all three metrics, indicating the effectiveness of our proposed approach.
***
**Q3** Can you provide additional evaluations with cross-family experiments?
**A3** Thank you for your professional insight! Initially, we did not include results from cross-family experiments as pure deep learning methods have struggled with this task. UFold, for instance, relies on the thermodynamic method Contrafold for data augmentation to achieve satisfactory results. Your valuable comment has made us realize the importance of including these results.
We have conducted an evaluation using cross-family RNA from the bpRNA-new dataset. Notably, the standard UFold method achieves an F1 score of 0.583, while our RFold approach reaches 0.616. When the same data augmentation technique based on Contrafold [5] is applied, UFold's performance increases to 0.636, whereas our RFold method yields a score of 0.651. This places RFold second only to the thermodynamics-based method, Contrafold, in terms of F1 score.
| Method | Precision | Recall | F1 score |
| --- | --- | --- | --- |
| E2Efold | 0.047 | 0.031 | 0.036 |
| SPOT-RNA | 0.635 | 0.641 | 0.620 |
| Contrafold | 0.620 | 0.736 | **0.661** |
| UFold | 0.500 | 0.736 | 0.583 |
| UFold + augmentation | 0.570 | 0.742 | 0.636|
| RFold | 0.614 | 0.619 | 0.616 |
| RFold + augmentation | 0.618 | 0.687 | ***0.651***|
***
**Q4** Unclear whether the inference time is credible.
**A4** We appreciate your thoughtful comment! For the comparison of inference time, we specifically rented an NVIDIA Titan Xp GPU to maintain consistency with the experimental setup of UFold. This information will be included in the revised manuscript.
***
**Q5** Architectural hyperparameters are missing. The supplement provides the inference code but not the training code.
**A5** We apologize for the lack of detailed architectural hyperparameters. For the seq2map attention, we employ a linear layer with a hidden size of 128. The U-Net backbone consists of four downsampling operations with max pooling in the encoding pathway, and four symmetric up-convolution blocks, each comprising an upsampling with a scale of 2 and a 2D convolution in the decoding pathway. The overall architecture aligns with that depicted in Figure 5 of the manuscript.
The inference code includes the essential components of RFold. As our paper is still under review, we are not providing the training code at this time. However, we plan to make the training code available in the future.
***
**Q6** Any plans for launching a web server?
**A6** Thanks for your constructive suggestion!
We have already open-sourced the inference code with pre-trained weights and a Colab demo. All the experiments in this manuscript are reproducible by simply running the code or the demo. However, according to the rules of NeurIPS, we are not allowed to share the links here. We will incorporate the establishment of a web server into our plan.
***
[1] Singh, Jaswinder, et al. RNA secondary structure prediction using an ensemble of two-dimensional deep neural networks and transfer learning. Nature communications, 2019.
[2] Chen, Xinshi, et al. RNA secondary structure prediction by learning unrolled algorithms. ICLR, 2020.
[3] Fu, Laiyi, et al. UFold: fast and accurate RNA secondary structure prediction with deep learning. Nucleic acids research, 2022.
[4] Mathews, David H., et al. Prediction of RNA secondary structure by free energy minimization. Current opinion in structural biology, 2006.
[5] Do, Chuong B., et al. CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics, 2006.
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal Comments
Comment: I appreciate the authors' dedication evident in their comprehensive responses. They have effectively addressed many of the concerns I had about the paper. Some of my remaining concerns are as follows:
**Regarding Q3:** In my opinion, it's crucial to incorporate the results from the cross-family experiments and acknowledge the inherent limitations of the study. One aspect I'd like to highlight is that the current version of the paper might slightly overemphasize the generalization performance of the proposed method (as seen in Sec 5.2). It would greatly enhance the paper if the authors could revisit this section, discussing both the method's generalization capabilities and its limitations with a more cautious approach.
**Regarding Q4:** I'm somewhat uncertain whether relying solely on the same GPU usage is sufficient to confidently assert the credibility of the inference time comparison. Given that numerous factors contribute to inference time and the time scale involved (< 1sec), results could be rather sensitive. Since the authors are exclusively focusing on inference time comparison, without factoring in training time, I believe it's imperative for them to measure the inference time across the compared methods in the same experiment setup. This will substantiate the claim regarding the efficiency of the proposed method.
**Overall**, while some concerns persist, I am inclined to believe that by incorporating the authors' responses, the manuscript's quality would be improved. Hence, I've adjusted my rating to 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your valuable feedback. We're pleased to note that several concerns have been addressed and truly appreciate your insightful and constructive comments!
We will make detailed revisions based on your feedback in the final version.
Best regards,
Authors | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their insightful and constructive feedback on our manuscript. We are encouraged by their recognition of our work as being **interesting and promising** (Reviewer RVaK, ZyLM, FGw2). Furthermore, the fact that they regard our methodology as **novel in the domain** (Reviewer RVaK) and **effective** (Reviewer RVaK, ZyLM, FGw2) is particularly encouraging. Additionally, the reviewers appreciated the **comprehensiveness of our experiments** (Reviewer RVaK, ZyLM, G61p, FGw2), and commented favorably on the **clarity of our presentation and the well-defined nature of our formulation** (Reviewer G61p, ZyLM).
In response to feedback, we provide detailed responses to address each reviewer’s concerns point by point. The response mainly includes:
### 1. Methodology Comparison
#### 1.1 General Comparison with Mainstream Methods
To provide a clear methodology comparison, we have summarized the mainstream methods alongside our own in the table below. The definitions of constraints (a-c) used here are consistent with those in the paper:
| Method | SPOT-RNA [1] | E2Efold [2] | UFold [3] | RFold |
| --- | --- | --- | --- | --- |
| Pre-Processing | Pairwise concat | Pairwise concat | Pairwise concat + implicit matching [4] | Seq2map |
| Backbone model | CNN + LSTM | Transformer | U-Net | U-Net |
| Post-Processing | Sigmoid($\cdot$)| Unrolled Algorithm | Unrolled Algorithm | Decoupled Optimization |
| Constraint (a) | ✖️ | ✔️ | ✔️ | ✔️ |
| Constraint (b) | ✖️ | ✔️ | ✔️ | ✔️ |
| Constraint (c) | ✖️ | ✖️ | ✖️ | ✔️ |
For the **pre-processing**, RFold is the only approach that does not require explicit hand-crafted features produced by pairwise concat or implicit matching.
For the **post-processing**, RFold proposes a novel decoupled optimization to satisfy all constraints. Previous prevalent approaches such as E2Efold and UFold can only approximate the constraints; *they cannot guarantee satisfaction of constraint (c)*.
#### 1.2 Detailed Comparison with UFold
As suggested by Reviewer ZyLM and G61p, we delineate the key similarities and differences between RFold and UFold below:
*Similarity*:
Both RFold and UFold use the U-Net architecture as the backbone model for RNA secondary structure prediction.
*Dissimilarity*:
(1) UFold requires preprocessing of the RNA sequence into hand-crafted features, whereas RFold does not. Specifically, the input of UFold's U-Net is represented as a $17 \times L \times L$ matrix for a given RNA sequence of length $L$, while the input of RFold's U-Net is the feature map of $1 \times L \times L$ from the Seq2map Attention.
(2) UFold employs the same post-processing strategy as E2Efold [2], which utilizes an unrolled algorithm. This approach, however, does not guarantee satisfaction of the three important constraints. In contrast, RFold employs a decoupled optimization approach to satisfy these constraints in a simple and efficient manner.
### 2. Evaluation with Pseudoknots
As recommended by Reviewers RVaK and G61p, we present the evaluation of pseudoknot structure prediction. We pick all sequences containing pseudoknots from the RNAStralign test dataset. The results are as follows:
| Method | Precision | Recall | F1 score |
| ---------------- | --------- | ------ | -------- |
| HotKnots | 0.500 | 0.565 | 0.531 |
| RNAstructure [5] | 0.778 | 0.761 | 0.769 |
| NuPack [6] | 0.724 | 0.933 | 0.815 |
| SPOT-RNA [1] | 0.677 | 0.978 | 0.800 |
| E2Efold [2] | 0.844 | 0.990 | 0.911 |
| UFold [3] | 0.962 | 0.990 | 0.976 |
| RFold | **0.971** | **0.993** | **0.982** |
As can be seen, RFold consistently outperforms UFold across all three metrics, underscoring the efficacy of the proposed methodology.
### 3. Evaluation with Cross-family RNA
As suggested by Reviewer RVaK, we have conducted an evaluation using cross-family RNA from the bpRNA-new dataset. It's noteworthy that the standard UFold method attains an F1 score of 0.583, while our approach reaches 0.616. With the same data augmentation technique based on Contrafold [8], UFold's performance increases to 0.636, whereas our RFold method achieves 0.651. This F1 score positions RFold second only to the thermodynamics-based method, Contrafold.
| Method | Precision | Recall | F1 score |
| --- | --- | --- | --- |
| E2Efold | 0.047 | 0.031 | 0.036 |
| SPOT-RNA | 0.635 | 0.641 | 0.620 |
| Contrafold | 0.620 | 0.736 | **0.661** |
| UFold | 0.500 | 0.736 | 0.583 |
| UFold + augmentation | 0.570 | 0.742 | 0.636|
| RFold | 0.614 | 0.619 | 0.616 |
| RFold + augmentation | 0.618 | 0.687 | ***0.651***|
### 4. Dataset Split
In this work, we did not create any new datasets. Since the data and splits from previous studies are publicly available, we strictly adhered to their data and splits for our experiments.
### 5. Reproducibility
We have already open-sourced the inference code along with the pre-trained weights and provided a Colab demo for easy reproduction of all the experiments in this manuscript. However, in accordance with NeurIPS regulations, we are unable to share the links here.
We thank Reviewer RVaK for the great suggestion, and we will incorporate the establishment of a web server into our plan.
### 6. Constraints and Generalization Ability
The stringent constraints imposed by our model may result in a lower recall metric, suggesting that some uncertain pairs might be ignored due to these strict rules.
We introduce a more flexible solution, soft-RFold, detailed in Appendix B. By incorporating a checking mechanism and adjusting the confidence threshold, soft-RFold is capable of enhancing the recall metric. | NeurIPS_2023_submissions_huggingface | 2,023 |
Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance | Accept (poster) | Summary: This paper has two contributions:
(1) MoDo algorithm which is a variant of MGDA with a double sampling to obtain an unbiased stochastic estimate of the gradient problem.
(2) A solid theoretical analysis of the error of multi-objective optimization.
Strengths: This paper has a very detailed analysis on the optimization error and generalization error of multi-objective optimization.
Weaknesses: In the experiments, the authors only compare MoDo with MGDA, but there are many other algorithms, like CAGrad, GradNorm, Uncertainty Weight. This single baseline is not enough to demonstrate the effectiveness of the proposed method.
Overall, MoDo is better than MGDA. However, in two objectives out of three (Table 2), MGDA is better than MoDo, which is also not convincing enough regarding the effectiveness of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you summarize and simply explain the advantages of MoDo over MGDA here, smaller error or more efficient?
Why does MoDo have an unbiased gradient estimate? (line 142)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors only discuss the situation of using Pareto optimal to get the aggregated gradient.
However, there are methods other than Pareto-based ones to balance multi-objective optimization, such as balancing gradient magnitudes as in AdaTask and GradNorm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for acknowledging the strengths of our work. Our point-to-point response to your comments and suggestions follows next.
>**W1.** In the experiments, authors only compare MoDo with MGDA, but there are many other algorithms, like CAGrad, GradNorm, Uncertainty Weight. This baseline is not enough to demonstrate the effectiveness of the proposed method.
See response to **General Response-Q2**.
>**W2.** Overall, MoDo is better than MGDA. However, in two objectives out of three (table 2), MGDA is better than MoDo, but also not convincing enough on the effectiveness of the proposed method.
1. This may be a **misinterpretation** of the results for the following two reasons.
This phenomenon is mainly due to trade-offs among different tasks. In the simulation, the loss values of DSLR and Webcam are smaller than that of Amazon. Therefore, MGDA tends to favor these tasks compared to Amazon, leading to significantly worse performance of MGDA on Amazon. This calls for a more holistic measure -- the average performance degradation of the method compared to single-task learners (last column of Table 2). It can be seen that MoDo clearly outperforms MGDA with regard to this measure. Thus, we believe that the bias of MGDA towards some tasks compared to MoDo in Table 2 does not indicate a lack of effectiveness of MoDo, but rather a limitation of MGDA.
2. More results in **Appendix D.2 and General Response-Q2** demonstrate the effectiveness of MoDo over MGDA.
>**Q1.** Could you summarize and simply explain the advantages of MoDo over MGDA here, smaller error or more efficient?
**MoDo has advantages over MGDA in both smaller optimization error and better efficiency** because:
1) **MoDo is better than the full-batch MGDA in efficiency**, as it does not require computing the full-batch gradients at each iteration but only stochastic estimates of the gradients.
2) **MoDo has a smaller optimization error theoretically than the vanilla stochastic version of MGDA**, as the latter is not guaranteed to converge to Pareto stationarity, as proved in [47, Theorem 1], whereas MoDo is.
>**Q2.** Why does MoDo have an unbiased gradient estimate? (line 142)
This is because, taking the expectation of the update of $\lambda_ t$ in (6a) w.r.t. the stochastic samples, we have $\mathbb{E}_ {z_ {t,1}, z_ {t,2}}[\nabla F_ {z_ {t,1}}(x_ t)^ \top \nabla F_ {z_ {t,2}}(x_ t)\lambda_ t] = \nabla F_ S(x_ t)^ \top \nabla F_ S(x_ t)\lambda_ t$, which equals the full-batch gradient of problem (4b); the two independent samples $z_ {t,1}$ and $z_ {t,2}$ make the product of the two stochastic gradient matrices unbiased. See also **General Response-Q3**.
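This double-sampling effect can be checked numerically; below is a toy sketch (our own notation and a simple additive-noise model, not the paper's exact setting): with two independent samples the product estimator averages to the full-batch value, while reusing one sample leaves a bias of the form E[noise^T noise] lambda.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 3, 2                              # parameter dim, number of objectives
G = rng.standard_normal((d, M))          # full-batch gradients, one column per objective
lam = np.array([0.6, 0.4])               # current weights lambda_t

def noisy_grads():
    """One stochastic sample of the gradient matrix (additive-noise toy model)."""
    return G + 0.5 * rng.standard_normal((d, M))

n = 50_000
double = np.zeros(M)                     # MoDo-style: two independent samples z1, z2
single = np.zeros(M)                     # same sample used twice (biased)
for _ in range(n):
    G1, G2 = noisy_grads(), noisy_grads()
    double += G1.T @ G2 @ lam / n
    single += G1.T @ G1 @ lam / n

target = G.T @ G @ lam                   # full-batch value driving the lambda_t update
# `double` converges to `target`; `single` carries an E[noise^T noise] lam bias.
```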
>Limitations. Authors only discuss the situation of using Pareto optimal to get the aggregated gradient. However, there are other methods other than Pareto to balance multi-objective optimization like balancing gradient magnitudes like AdaTask, GradNorm.
We will include more discussion of AdaTask, GradNorm [4] in the related works section. See also the new empirical comparison in **General Response-Q2**.
We hope that our responses have addressed your questions. Thank you again!
---
Rebuttal Comment 1.1:
Title: A kind request of your feedback
Comment: Dear Reviewer UWFs,
Thank you very much for your review. While the discussion period has started several days ago, we have not received your feedback on our response. We believe we have addressed all your concerns including:
- Comparison with other baselines
- Advantages of MoDo over MGDA
We kindly request your feedback on whether our response resolves your concerns. Your additional comments would be invaluable to us!
Sincerely, Authors
---
Rebuttal Comment 1.2:
Title: following questions
Comment: Thanks for the reply of the authors!
I feel my original rating is too low but I need to confirm some points:
(1) What is the motivation for stochastic MOO algorithms? Why do we want stochastic MOO? Because the true or full gradient is hard to get (as claimed in MoCo)?
I have this question because MGDA and CAGrad were introduced and worked before these stochastic MOO algorithms. I wonder how MGDA or CAGrad got the objective gradients? Maybe they just use the gradient of a mini-batch? It is hard to believe MGDA or CAGrad was using a full gradient (the gradient of the whole dataset).
Yeah, this is the github (https://github.com/isl-org/MultiObjectiveOptimization/blob/master/multi_task/train_multi_task.py) of MGDA from the original authors; it seems MGDA is using the mini-batch gradient.
(2) In your "general response", you mentioned "vanilla mini-batch MGDA does not converge"; may I ask where this comes from? Any proof or empirical results?
note: I am not the author of MGDA, just wonder how does this multi-gradient method really work in practice
---
Reply to Comment 1.2.1:
Title: Response to following questions
Comment: Dear Reviewer UWFs,
Thanks a lot for your prompt reply! Below are our answers.
>**Q1.** The motivation of stochastic MOO algorithms? Why we want stochastic MOO? because the true gradient or full gradient is hard to get (as claimed in MoCo)?
The motivation is that the full-batch gradient requires large memory and computation, and is impractical to obtain for large-scale problems.
As the reviewer correctly pointed out, most of the implementations of practical multi-task learning algorithms use stochastic (mini-batch) gradient instead of the true / full gradient.
This can be observed in the GitHub repository of MGDA you mentioned, and also in that of CAGrad. However, the algorithm design and its analysis only hold in the deterministic setting, which leaves a significant gap. Note that this gap is not purely theoretical but also of practical relevance, as evidenced in your Q2 below.
>**Q2.** In your "general response", you mentioned "vanilla mini-batch MGDA does not converge", may I ask where does this come from? any proof or empirically results? note: I am not the author of MGDA, just wonder how does this multi-gradient method really work in practice
Yes, this has been proved in recent works [47, Theorem 1] and [25, Section 4], which state that "there is a stochastic convex optimization problem for which MGDA, PCGrad, and CAGrad do not converge to the Pareto optimal solution." Numerical examples demonstrating this non-convergence phenomenon can also be found in Figure 1 of [47], Figure 2 of [25], and Figure 3 of [8].
However, prior works on MOO convergence analysis (e.g., the analysis provided in CAGrad) analyzed the deterministic versions of the algorithms but implemented the stochastic mini-batch gradient. This motivates a recent line of works that design different stochastic variants of MGDA [8,25,47] and provide theoretical convergence analyses for them.
[25] Suyun Liu and Luis Nunes Vicente. "The Stochastic Multi-gradient Algorithm for Multi-objective Optimization and its Application to Supervised Machine Learning." Annals of Operations Research, 2021.
[47] Shiji Zhou et al. "On the Convergence of Stochastic Multi-Objective Gradient Manipulation and Beyond." NeurIPS 2022
[8] Heshan Fernando et al. "Mitigating gradient bias in multi-objective learning: A provably convergent stochastic approach." ICLR 2023
---
Thank you very much for engaging in the discussion! We really appreciate your feedback and hope our answer resolves your questions. We would be happy to answer your following questions if there are any. | Summary: This work considers the multi-objective learning problem. The classic idea of dynamic weighting in MOL is to take gradients from each objective and to weight them using a fixed procedure to avoid conflicts between different objectives. Empirically, however, there often seems to be performance degradation when using these methods. They discover that this is due to a tradeoff between optimization, generalization, and conflict avoidance. They propose a new algorithm, MoDo, that interpolates between static weighting and dynamic weighting and find parameters that can control this tradeoff effectively.
Strengths: Overall, this is a strong work that provides a lot of insight into dynamic weighting in multi-objective optimization. The highlights are as follows:
- The paper is very intuitively written and easy to follow. Specifically, the three-way tradeoff is clear both intuitively and quantitatively. In addition, Figures 1 and 2 are very well done and extremely insightful.
- The proposed MoDo algorithm is very simple and intuitive while brilliantly highlighting the three-way tradeoff inherent in dynamic weighting algorithms. I found the remark about being in the early stopping regime for generalization error to diminish (T = o(n)) interesting, and it is also empirically highlighted later in the work.
- The tradeoffs for both strongly convex and nonconvex cases are analyzed
- The findings in this work translate to a practical explanation of the behavior of dynamic weighting algorithms
Weaknesses: - While MoDo is a great theoretically inspired algorithm that controls the aforementioned tradeoffs, I am not sure how well it performs empirically. While there is an empirical result, the tasks seem rather simple. In addition, as Table 2 shows, MoDo does not consistently outperform MGDA. In practice, I feel handling the conflict avoidance tradeoff shouldn't matter much as long as the accuracy for the task is good. Therefore, I am not sure how applicable the algorithm would be in practical situations.
- As the authors mention in their limitations section, the analysis covers only one specific algorithm in one specific setting, which makes it unclear how generally the principles in this work apply when the assumptions are relaxed. Given that this work is meant to analyze a practical phenomenon, I feel this is a substantial weakness, as it is unlikely that Assumptions 1-3 all hold in real-world settings (such as the image classification setting the authors test on).
- There is no intuition of the proof in the main text, and while this is common in optimization papers, it would be useful to learn about the key insights towards the proof.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Can you compare the theoretical tradeoffs in Table 1 with the empirical findings in Figure 4? How well do they match with each other?
- How can knowledge of this three-way tradeoff be used to design better algorithms in multi-task learning in the future?
- How can we tune the parameters in practice? How sensitive is the algorithm to the parameters?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Great!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for recognizing our work as a strong one! We will respond to the weaknesses and questions point by point as follows.
> **W1.** Empirical benefit of MoDo.
We have more results in the submitted **Appendix D.2** to demonstrate its better performance on other datasets. Also see **General Response-Q2** for more results.
> **W2.** Strong assumptions compared to practical phenomenon.
It is a **misunderstanding** that our theory requires ''Assumptions 1-3 are all true''. In fact, we only generally require smoothness of the functions (Assumption 1), and provide separate discussions for the nonconvex case (under Assumptions 1 and 3) and the strongly convex case (under Assumptions 1 and 2), for both optimization and generalization.
**These assumptions are standard in prior optimization analysis [8,25,47].**
The general smooth nonconvex case covers many practical problems, such as neural networks with ELU activation functions, and the strongly convex smooth case covers, e.g., the linear fine-tuning problem with an $\ell_2$-norm regularizer.
> **W3.** No intuition of the proof in the main text.
Due to space limitation, we defer the intuition of the proof to the Appendix in each section.
The key insights of the proof are summarized as follows.
1. For **generalization**, the key is to use the error of the algorithm output resulting from perturbation of training data (stability) to bound the expected difference of performance on the testing and training data (generalization). Then using the property of stochastic sampling and the update function, the expected error of the current iteration caused by the dataset perturbation can be bounded recursively by a linear function of that of the previous iteration.
See also **Summary of Theoretical Contributions-T2**.
2. For **optimization**, the key is to use the descent Lemma derived from smoothness of the functions, which shows the function value approximately decreases after each update, with an approximation error caused by the dynamically changing weight. Then we are able to bound this approximation error using the property of the update of the dynamic weight based on the convexity of the subproblem.
See also **Summary of Theoretical Contributions-T3**.
> **Q1.** Comparison of Table 1 and Figure 4.
Yes, we have discussions and comparisons in Section 5.2; see line 313-329 in the main text. Since Figure 4 corresponds to the general nonconvex case, it matches the result described in Table 1, line 1 for the nonconvex (NC) case.
> **Q2.** How can knowledge of this three-way tradeoff be used to design better algorithms in multi-task learning in the future?
As discussed in Section 3.3 and Appendix D.1.2 of our paper, our theory (Theorems 1 and 2) suggests that, when choosing dynamic weights at each iteration, it is better that **the drift of the dynamic weights is not large across iterations**, to ensure better test performance.
If the dynamic weights are iteratively updated, this can be ensured by
- using an appropriately small learning rate for the dynamic weights;
- not updating the dynamic weights at every iteration, but only when there are conflicts among the gradients.
> **Q3.** How can we tune the parameters in practice? How sensitive is the algorithm to the parameters?
We have included hyperparameter choice and sensitivity analysis in **Appendix D**. In practice, $T$ should not be too large to ensure good generalization performance. $\alpha, \gamma$ depend on the choice of $T$. For example, $\alpha = \mathcal{O} (T^{-\frac{1}{2}})$ works well, and a relatively small $\gamma$ works well in practice. Furthermore, overall performance of the algorithm is mostly sensitive to iterate step size $\alpha$, while individual task-trade offs are mostly sensitive to dynamic weighting step size $\gamma$.
---
We hope that our responses to your comments are satisfactory. Thank you again!
---
Rebuttal Comment 1.1:
Comment: Thanks for the comments and clarification. Previously I was on the 5-6 borderline, but after the clarifications I still maintain that 6 is a better choice, as I still believe this work is technically solid and would have moderate impact. My main concern is still that it is not clear how these insights can be applied for future advances in multi-objective optimization.
---
Reply to Comment 1.1.1:
Title: Response to the concern of how to apply the insights in this paper
Comment: ## How these insights can be applied for future advances in multi-objective optimization
Thank you very much for your quick response.
This is an excellent suggestion, and applying the insights of this work to future advances in multi-objective optimization (MOO) is in fact part of our ongoing work.
Here's a breakdown of how the insights can be applied in each of the areas.
**1. Theoretical Applications:**
- **Analyzing other MOO algorithms.** This cannot be achieved by simply combining prior works on optimization and generalization analysis, because they often focus on different settings. Applying our theoretical framework, we could also analyze the three types of errors for other dynamic weighting MOO algorithms such as MoCo, CAGrad, and PCGrad.
Specifically, for generalization, Propositions 2, 3 still hold. Since the stochastic implementations of PCGrad and CAGrad are sampling determined (Definition 3), the bound in the NC case (Theorem 1) holds. Combining the optimization and generalization error bounds, we can find better hyperparameters in PCGrad and CAGrad to minimize the test risks.
- **Studying benefits of MOO algorithms over static weighting.**
Prior theoretical works on MOO mostly focus on optimization convergence to Pareto stationarity. However, this can also be guaranteed by static weighting, so the theoretical benefit of MOO algorithms over static weighting remains open. Our analysis addresses this critical question by demonstrating the advantage of MOO algorithms in CA distance reduction, which justifies their use in cases where CA distance reduction is crucial.
**2. Practical Applications:**
- **Hyperparameter choice:**
Our theory is suitable for analyzing the effect of hyperparameters such as step size and number of iterations on the three errors, and the total testing risk. This allows us to find better hyperparameters to minimize the test risks. For example, when performing grid search of hyperparameters for MoDo, we focus on a range with relatively small $\gamma$.
- **Algorithm choice:**
Comparing error bounds of different algorithms allows for a more informed algorithm selection process based on the nature of the problem. The choice of algorithm depends on which error dominates the performance.
For example, if in a problem the CA distance is a major factor preventing the algorithm from achieving good performance, then we choose the algorithm with the smallest CA distance. If instead the generalization error dominates, we could choose, e.g., MoDo with a small step size $\gamma$, or static weighting, to obtain better test performance.
- **Algorithm design:**
Our theory could inspire the development of new MOO algorithms that strike a better balance among the three types of errors.
For example, it suggests that a large drift of $\lambda$ could degrade the test error. Therefore, future algorithms could be designed to update $\lambda$ not at every iteration, but only when there is significant conflict among the gradients, e.g., when the angle between two gradients exceeds 90 degrees. This could potentially improve both the test error and the efficiency of the MOO algorithm.
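The conflict-triggered update idea above can be sketched as follows (a hypothetical illustration for $M=2$ objectives with $\lambda = (w, 1-w)$; `gamma` and the 90-degree test are the quantities mentioned in the text, and this is our sketch, not code from the paper):

```python
import numpy as np

def conflict_triggered_update(w, g1, g2, gamma=0.1):
    """Update the scalar weight w (so that lambda = (w, 1 - w)) only when
    the two gradients conflict, i.e., their angle exceeds 90 degrees
    (negative inner product); otherwise keep the weights fixed."""
    if np.dot(g1, g2) >= 0:
        return w  # no conflict: keep current weights, hence zero weight drift
    # one projected gradient step on ||w*g1 + (1-w)*g2||^2 w.r.t. w
    d = w * g1 + (1.0 - w) * g2
    grad_w = 2.0 * np.dot(d, g1 - g2)
    return float(np.clip(w - gamma * grad_w, 0.0, 1.0))
```

With non-conflicting gradients the weights are left untouched (no drift, hence better stability), while conflicting gradients trigger a step of the weights toward the min-norm combination.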
---
Following your excellent suggestion, we will also incorporate this in our discussion of future work in the revision. | Summary: This paper studies three-way trade-off in multi-objective learning: 1) optimization error caused by sampling and stochastic training; 2) generalization error that measures the difference between source and target sets; 3) conflict-avoidance direction error that is the bias between the calculated direction and the right one. The authors propose the MoDo algorithm to optimize this trade-off in one algorithm. Plenty of theoretical analysis supports the claims.
Strengths: 1. This paper considers the multi-objective algorithmic design in a bigger picture by compositing three-way trade-off together, and provides solid analysis to solve it.
2. This paper is well written. In particular, Table 1 and Figure 2 help to understand the contribution better.
Weaknesses: 1. The proposed algorithm needs to compute three batches of gradients to run an iteration, which is much less computationally efficient.
2. The benchmarks are too limited: they only contain static weighting and MGDA (proposed in 2018). The experiments do not compare with other typical methods like PCGrad and CAGrad, which have better empirical performance.
3. Several claims are out of date. The statement in lines 34-35, "Unfortunately, the reason behind this empirical performance degradation is not fully understood and remains an open question", is not accurate, since [1] has proved that the vanilla MGDA algorithm, as well as PCGrad and CAGrad, will not converge to Pareto optimal solutions. The statement in line 268, "This can be overcome by increasing the batch size during optimization [25]", is wrong, because the assumed Lipschitzness of $\lambda$ with respect to the gradients has been proved to be wrong (Proposition 2 in [1]). Also, it can be proved that even a stochastic error in the gradient can cause a significant bias in the direction, so increasing the batch size alone does not work.
[1] Shiji Zhou, Wensheng Zhang, Jiajun Jiang, Wenjing Zhong, Jinjie Gu, Wenwu Zhu. "On the Convergence of Stochastic Multi-Objective Gradient Manipulation and Beyond." Advances in Neural Information Processing Systems, 2022.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Why use double sampling? Can MoCo completely solve the problem? Or is it possible to reduce the sample complexity?
2. Can the proposed theory help improve PCGrad and CAGrad?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Sample efficiency of the proposed algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for acknowledging the strengths of our work. Our point-by-point response to your comments follows.
> **W1 & Q1-3 & Limitation.** MoDo has a limitation on sample efficiency. Is it possible to improve it?
- Theoretically, MoDo is not necessarily worse on total sample complexity; see our **General Response-Q1**.
- Practically, MoDo does not require more samples, because the batch size can be controlled to keep the sample size per iteration $n'$ the same.
E.g., for the experiments in **Appendix D and General Response-Q2**, MoDo generally performs better than the baselines under the same $T\times n'$.
In short, **MoDo could have better total sample complexity theoretically and practically**. While there may be room for improvement, it is beyond the scope of this paper.
>**W2.** Limited benchmarks.
See more results in **General Response-Q2**.
>**W3.** Outdated claims. (The reviewer's [1] corresponds to [47] as indexed in our submission.)
>**W3-1.** "the reason ... is not fully understood ..." is not accurate. [47] proved...
We agree **[47] partially addresses this from the optimization perspective**.
However, "empirical performance degradation" in this context refers to the observed phenomenon of **test performance** being often worse than static weighting [15,40]. The reasons for this degradation are not fully understood theoretically, as prior works, such as [47], have focused solely on analyzing the theoretical training (optimization) error, NOT the generalization (test) error.
In fact, it has been demonstrated in [40, Figure 2] that the training (optimization) errors are all relatively small, while the generalization performances differ. Therefore, **only analyzing the optimization error is not enough to fully understand the test performance**. Nevertheless, we will follow your great suggestion to describe related works more precisely and acknowledge the pioneering contribution of [47] in optimization.
>**W3-2.** "...can be overcome by increasing the batch size[25]..." is wrong, because the assumption...in [25] is wrong...
This is a **misunderstanding**. We intend to convey that "the bias in the CA direction can be mitigated by increasing the batch size". We agree that the assumption on Lipschitz $\lambda^*(x)$ in [25] is wrong, but **this is a limitation of their proofs, NOT of this claim**. In fact, for $Q\in \mathbb{R}^{d\times M}$, although $\lambda^*(Q)\in \arg\min_{\lambda\in \Delta^M}||Q\lambda||^2$ is not Lipschitz continuous w.r.t. $Q$, it can be proved that the update direction is $\frac{1}{2}$-Hölder continuous w.r.t. $Q$, as stated below:
$$||Q\lambda^*(Q)-Q'\lambda^*(Q')||^2 \leq 4\max(||Q||,||Q'||)\,||Q - Q'||.$$
Plugging in $Q = \nabla F_S(x)$ and $Q' = \nabla F_Z(x)$, with $Z$ a stochastic batch of $S$, we have
$$||\nabla F_S(x)\lambda^*_S(x) - \mathbb{E}_Z[\nabla F_Z(x)\lambda^*_Z(x)]||^2 \leq \mathbb{E}_Z||\nabla F_S(x)\lambda^*_S(x) - \nabla F_Z(x)\lambda^*_Z(x)||^2 \leq 4 \ell_F\, \mathbb{E}_Z||\nabla F_S(x) - \nabla F_Z(x)|| = \mathcal{O}(1/\sqrt{|Z|}),$$
which decreases as the batch size $|Z|$ increases. Therefore, the bias in the CA direction can be reduced by increasing the batch size.
We will follow your great suggestion to revise it as "One challenge ... is the bias in the CA direction, which can be mitigated by increasing the batch size ...", and **acknowledge [47] is the first work to prove the assumption is wrong in prior work [25], and propose a bias reduction scheme to address this issue**.
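To complement the bound above, here is a quick Monte Carlo check that the bias in the CA direction shrinks with batch size (a hypothetical toy simulation, not from the paper: `min_norm_dir` uses the closed-form min-norm weights for $M=2$, and mini-batch noise is modeled as Gaussian with standard deviation $1/\sqrt{|Z|}$):

```python
import numpy as np

def min_norm_dir(G):
    """CA update direction -G @ lam* for M = 2, via the closed-form
    min-norm weights lam* = argmin_{lam in [0,1]} ||lam*g1 + (1-lam)*g2||^2."""
    g1, g2 = G[:, 0], G[:, 1]
    denom = np.dot(g1 - g2, g1 - g2)
    lam = 1.0 if denom == 0 else float(np.clip(np.dot(g2, g2 - g1) / denom, 0.0, 1.0))
    return -(lam * g1 + (1.0 - lam) * g2)

G_full = np.array([[1.0, 0.0],
                   [0.0, 1.0]])   # two conflicting "full-batch" gradients
d_full = min_norm_dir(G_full)

def ca_bias(batch, trials=8000, seed=0):
    """|| E_Z[d(noisy gradients)] - d(full gradients) ||, with Gaussian noise
    of std 1/sqrt(batch) mimicking mini-batch gradient averaging."""
    rng = np.random.default_rng(seed)
    dirs = [min_norm_dir(G_full + rng.normal(scale=1.0 / np.sqrt(batch), size=(2, 2)))
            for _ in range(trials)]
    return float(np.linalg.norm(np.mean(dirs, axis=0) - d_full))
```

On this toy instance, `ca_bias(64)` comes out smaller than `ca_bias(4)`, consistent with the $\mathcal{O}(1/\sqrt{|Z|})$ behavior derived above.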
>**Q1-1.** Why double sampling?
It is one way to mitigate the gradient bias without the momentum-based methods. See **General Response-Q3**.
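For concreteness, a minimal sketch of a double-sampling update of this kind (our illustration, assuming a $\lambda$-update of the form $\lambda_{t+1} = \Pi_{\Delta^M}(\lambda_t - \gamma\, \nabla F_{z_1}(x_t)^\top \nabla F_{z_2}(x_t)\lambda_t)$ with two independent batches, and a third batch for the $x$-update; details may differ from the paper's algorithm):

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = idx[u - css / idx > 0][-1]
    return np.maximum(v - css[rho - 1] / rho, 0.0)

def modo_step(x, lam, grad_mat, alpha=0.05, gamma=0.01, rng=None):
    """One double-sampling step: two independent stochastic gradient
    matrices G1, G2 for the lambda update (so that G1^T G2 is an unbiased
    estimate of the Gram matrix), and a third matrix G3 for the x update.
    grad_mat(x, rng) returns a d x M matrix of per-objective gradients."""
    rng = rng if rng is not None else np.random.default_rng()
    G1, G2, G3 = (grad_mat(x, rng) for _ in range(3))
    lam = proj_simplex(lam - gamma * (G1.T @ G2) @ lam)  # weight update
    x = x - alpha * G3 @ lam                             # parameter update
    return x, lam

# toy demo: two quadratics f_m(x) = ||x - c_m||^2 / 2 with noisy gradients
def toy_grads(x, rng, noise=0.1):
    c = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    G = np.stack([x - cm for cm in c], axis=1)  # columns: grad f1, grad f2
    return G + rng.normal(scale=noise, size=G.shape)

rng = np.random.default_rng(0)
x, lam = np.array([-1.0, -1.0]), np.array([0.5, 0.5])
for _ in range(500):
    x, lam = modo_step(x, lam, toy_grads, rng=rng)
```

On this toy problem, the iterate drifts toward the Pareto segment between the two minimizers while `lam` stays on the simplex throughout; this uses three stochastic gradient matrices per iteration, matching the per-iteration cost discussed in W1.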
> **Q1-2.** Can MoCo solve the problem?
If the "problem" is to reduce the bias in optimization, MoCo can solve it. But the "problem" we study in this paper, the theoretical test risk, is **different**. MoCo **CANNOT completely solve** the test problem, since it only provides a guarantee for the optimization error but not for the generalization error. Even for optimization, MoDo has improved sample complexity; see **General Response-Q1**.
>**Q2.** Does the proposed theory help improve PCGrad and CAGrad?
Great suggestion! We summarize 2 ways to improve PCGrad and CAGrad based on our theory.
**1. Controlling the change of $\lambda_t$ can improve generalization and achieve the best trade-off.**
Our theory suggests, while dynamic weighting enables conflict avoidance, large drift of weights during update may degrade generalization. Thus, controlling this drift in PCGrad and CAGrad could improve their performance.
**2. The theoretical framework for MOL is general, and can be used for other dynamic weighting methods including PCGrad and CAGrad.**
E.g., for generalization, Propositions 2, 3 still hold. Since the stochastic implementations of PCGrad and CAGrad are sampling determined (Definition 3), the bound in the NC case (Theorem 1) holds. Combining the optimization and generalization error bounds, we can find better hyperparameters in PCGrad and CAGrad to minimize the test risks.
---
We hope this can address your questions and raise the score. Thank you again!
---
Rebuttal Comment 1.1:
Title: A kind request of your feedback
Comment: Dear Reviewer LuAm,
Thank you very much for your review. While the discussion period started several days ago, we have not yet received your feedback on our response. We believe we have addressed all your concerns, including:
- Comparison with other baselines
- Sample efficiency of MoDo
- More precise and detailed discussion and comparison of the pioneering contribution of [47] in stochastic multi-objective optimization
We kindly request your feedback on whether our response resolves your concerns. Your additional comments would be invaluable to us!
Sincerely, Authors
---
Rebuttal Comment 1.2:
Title: Thanks for the rebuttal.
Comment: Although the reviewer did not engage, I'll carefully read and consider the rebuttal during the decision period.
AC
---
Reply to Comment 1.2.1:
Title: Thanks for the assistance
Comment: Dear Area Chair,
Thank you very much for your efforts in the review process. We would be happy to provide any additional information if asked.
Sincerely, Authors | Summary: This paper studies multi-objective optimization and, in particular, focuses on generalization and stability analysis. By decomposing the Pareto stationarity error into the generalization and optimization errors, the authors analyze and upper-bound these two errors respectively. The distance to the conflict-avoidant direction, which optimizes all objectives jointly, is also analyzed. A stochastic variant of MGDA named MoDo is developed and analyzed. Based on the theoretical derivations, the authors find a three-way tradeoff among the optimization error, the generalization error, and the CA error. Some implications for parameter selection, with illustrations, are provided.
Strengths: 1. The studied topic on multi-objective optimization has received increasing attention thanks to the important applications like multi-task learning. Studying the generalization, stability and the tradeoff is important and under-explored.
2. Although generalization has been studied for multi-objective optimization, stability has not been explored. This work seems to be the first one to fill this gap.
3. The tradeoff among optimization and generalization is not surprising, but their tradeoffs with the CA direction error seem to be new given the multi-objective structure.
Weaknesses: 1. How is (4a) equivalent to (4b)? Can the authors provide some more details?
2. Before (4a), it says that CA direction maximize the minimum descent of all objectives. But (4a) seems to say that the direction minimizes something.
3. The upper bounds in the nonconvex case may not be very tight, given the large exponential dependence, and hence may not exactly capture the generalization and tradeoff behaviors. Is it possible to provide a lower bound in this case? Or what are the challenges in getting a tight bound?
4. The analysis uses the Frobenius norm, which may be large in practice. Instead, the spectral norm may be more appropriate. Can the authors comment on why they use the Frobenius norm rather than the spectral norm?
Overall, I appreciate the studied problem and the analysis in this work, but I am also open to other reviewers’ comments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See Weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for appreciating our problem setup and analysis! We will respond to the weaknesses and questions point by point as follows.
> **W1. & W2.** Derivations of (4a) and (4b).
This is a **standard derivation for the MGDA algorithm**. Similar derivations are provided in [25, Section 3.1], [47, Section 2.2].
Here we also include a derivation as follows. For (4a), the original problem in MOO is to maximize the minimum descent (among all objectives) along the update direction $d$, where the minimum descent given $d$ can be computed by
$$\frac{1}{\alpha}\min_{m\in [M]} \{f_{S,m}(x) - f_{S,m}(x+\alpha d)\} \approx \min_{m\in [M]} -\langle \nabla f_{S,m}(x), d\rangle.$$
Then, with regularization on $d$ to control its norm, the problem is formulated as
$$\max_{d \in \mathbb{R}^d} \min_{m\in [M]} -\langle \nabla f_{S,m}(x), d\rangle - \frac{1}{2}||d||^2,$$
which is equivalent to (4a):
$$\min_{d \in \mathbb{R}^d} \max_{m\in [M]} \langle \nabla f_{S,m}(x), d\rangle + \frac{1}{2}||d||^2.$$
Then (4a) can be reformulated as
\begin{align}
\min_ {d \in \mathbb{R}^d} \max_ {m\in [M]} \langle \nabla f_ {S,m}(x), d\rangle + \frac{1}{2}||d||^2
& =\min_ {d \in \mathbb{R}^d} \max_ {\lambda \in \Delta^M} \langle \nabla F_ {S}(x)\lambda, d\rangle + \frac{1}{2}||d||^2 \\\\
&= \max_ {\lambda \in \Delta^M} \min_ {d \in \mathbb{R}^d} \langle \nabla F_ {S}(x)\lambda, d\rangle + \frac{1}{2}||d||^2
\end{align}
where we use the minimax theorem to exchange the order of the min and max operators in the last equality.
In this way, given $\lambda$, the optimal $d^*(x,\lambda)$ is obtained as $d^*(x,\lambda) = -\nabla F_S(x)\lambda$, and the optimal $\lambda^* \in \arg\min_{\lambda\in \Delta^M} ||\nabla F_S(x)\lambda||^2$. Combining these leads to the formulation (4b).
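As a concrete check of the (4b) formulation, for $M=2$ the inner problem $\min_{\lambda\in \Delta^2} ||\nabla F_S(x)\lambda||^2$ admits a closed form (a small numerical illustration of the derivation above, not from the paper):

```python
import numpy as np

def mgda_direction(g1, g2):
    """Closed-form solution of min_{lam in [0,1]} ||lam*g1 + (1-lam)*g2||^2,
    returning (lam*, d*) with d* = -(lam*·g1 + (1-lam*)·g2)."""
    denom = np.dot(g1 - g2, g1 - g2)
    lam = 1.0 if denom == 0 else float(np.clip(np.dot(g2, g2 - g1) / denom, 0.0, 1.0))
    return lam, -(lam * g1 + (1.0 - lam) * g2)

# two conflicting unit gradients: equal weights, equal descent on both
g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
lam, d = mgda_direction(g1, g2)   # lam = 0.5, d = (-0.5, -0.5)
```

Here $\langle \nabla f_1, d\rangle = \langle \nabla f_2, d\rangle = -0.5$: the min-norm direction equalizes the descent across the objectives, matching the max-min motivation behind (4a). When the unconstrained minimizer falls outside $[0,1]$, $\lambda$ is clipped and $d$ reduces to the negative of one of the two gradients.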
>**W3.** The upper bounds in the nonconvex case may not be very tight given the large exponential dependence, and hence may not be able to exactly capture the generalization and tradeoff behaviors. Is it possible to provide a lower bound in this case? Or what challenges in getting a tight bound?
This might be a **misunderstanding** of our theorem. Note that **we do NOT have exponential dependence** on the hyperparameters $T$ or $n$; rather, our general bound is of order $\mathcal{O}(T/n)$ in the NC case (see Theorem 1). Therefore, our bound is already tighter than the one for single-objective learning in [12], which can exhibit exponential dependence on $T$ without a $1/t$ step size decay, even for static weighting.
We overcome the exponential dependence by bounding the probability of selecting the perturbed data point in all $T$ iterations and the resulting error, instead of bounding the accumulated expected error in the output parameters through recursion over iterations, as done in [12].
The latter can result in exponential dependence on $T$ in the NC case, since the expected error at each iteration grows over the previous one at a rate larger than 1, due to the expansiveness of the update function.
>**W4.** The analysis uses Frobenius norm, which may be large in practice. Instead, spectrum norm may be more proper. Can the authors comment on why use Frobenius norm rather than spectrum norm?
Thank you for the suggestion. There are two main reasons for using Frobenius norm.
**1. The Frobenius norm of the gradient matrix, $||\nabla F_S(x)||_F$, can be directly derived from the $\ell_2$-norm of each gradient $||\nabla f_{S,m}(x)||$.**
Therefore, the $\ell_F$-Lipschitz continuity of $F_z(x)$ in the Frobenius norm (Assumption 1) can be directly derived from the $\ell_f$-Lipschitz continuity of $f_{z,m}(x)$ for all $m \in [M]$, with $\ell_F = \sqrt{M}\ell_f$, as discussed in Lemma 1. The latter assumption, $\ell_f$-Lipschitz continuity of $f_{z,m}(x)$ for all $m \in [M]$, is standard in prior works such as CR-MOGM [47] and MoCo [8] for optimization analysis.
**2. The spectral norm and the Frobenius norm can be used interchangeably up to a constant factor, since for $A\in\mathbb{R}^{d\times M}$ we have $||A||\leq ||A||_F \leq \sqrt{r}||A||$, where $r\leq M\ll d$ is the rank of $A$.**
In our analysis, $A$ is the gradient matrix $\nabla F_S(x)$, and $M$, the number of objectives, is a fixed constant that is relatively small compared to other factors.
Nevertheless, it would be interesting to derive our theory under the spectral norm or other matrix norms given appropriate assumptions.
---
We hope that our responses to your comments are satisfactory. Thank you again!
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the reviewer for the detailed response. Based on the global response, could I ask two more questions?
1. Can you elaborate on how MoDo achieves the $\epsilon^{-2}$ sample complexity from Theorem 3? Why is there a huge improvement over the SOTA MoCo without the bounded function assumption?
2. The Frobenius norm may still be a little large. Can the authors show the dependence on the number of tasks in the bounds, since it may matter in MTL?
---
Reply to Comment 1.1.1:
Title: Response to additional questions
Comment: Thank you very much for engaging in the discussion! The answers to your questions are as follows.
>**Q1-1.** Can you elaborate how MoDo achieves ${\cal O}(\epsilon^{-2})$ sample complexity from Theorem 3?
For a fair comparison with prior works such as MoCo, we refer to the result provided in Appendix C.3, page 39, Eq. (183), without taking the square root, which states that the optimization error is bounded as
$$\frac{1}{T}\sum_{t=1}^T \mathbb{E}_A ||\nabla F_S(x_t) \lambda_t^*(x_t)||^2 = \mathcal{O}\Big(\frac{1}{\alpha T} + \alpha + \gamma\Big),$$
where choosing $\alpha = \Theta(T^{-\frac{1}{2}})$ and $\gamma = \Theta(T^{-\frac{1}{2}})$ yields the optimal rate
$$\frac{1}{T}\sum_{t=1}^T \mathbb{E}_A ||\nabla F_S(x_t) \lambda_t^*(x_t)||^2 = \mathcal{O}(T^{-\frac{1}{2}}),$$
which implies $\mathcal{O}(\epsilon^{-2})$ sample complexity.
>**Q1-2.** Why is there a huge improvement over the SOTA MoCo without bounded function assumption?
This is mainly due to the double / independent sampling in the MoDo algorithm, and to the differences in our proof techniques compared to those in MoCo. Some key ideas of the proof techniques are summarized in **Summary of Theoretical Contributions-T3**.
To be more specific, in our proof we adopt the following steps:
**S1.** We first use the descent lemma for $F_S(x)\lambda$ with a fixed $\lambda$; see Eq. (176) in our Appendix. This allows cancellation of the function values when taking the telescoping sum, which cannot be achieved with a dynamic $\lambda_t$. As a result, the bounded function value assumption needs to be introduced in prior works that use a dynamic $\lambda_t$ in the descent lemma; see, e.g., Eq. (75) in the MoCo paper [8].
**S2.** Then, the inner product term $\mathbb{E}_A\langle\nabla F_S(x_t) \lambda, \nabla F_S(x_t) \lambda_{t+1}\rangle$ in the inequality Eq. (177) is related to the gradient norm $\mathbb{E}_A\|\nabla F_S(x_t) \lambda_t\|^2$ via the property of the $\lambda_t$ update in Lemma 16, Eq. (155), a nice property with bias reduction due to double sampling in the subproblem.
**S3.** Finally, taking the telescoping sum, the optimization error measured in the gradient norm $\mathbb{E}_A\|\nabla F_S(x_t) \lambda_t\|^2$ can be bounded without introducing additional assumptions.
Compare this with the result in MoCo without the bounded function value assumption (their Theorem 2): the inner product term in their Eq. (70), $\langle\nabla f_m(x_k), \nabla F(x_k) \lambda_k^* - Y_k \lambda_k\rangle$, is bounded by the Cauchy-Schwarz inequality and then the triangle inequality, which boils down to bounding the moving-average gradient error $\|Y_k - \nabla F(x_k)\|$, the dynamic weight approximation error $\|\lambda_k - \lambda_{\rho,k}^*\|$, etc. This may not be as tight as our **S2**, which directly uses the property in Lemma 16 enabled by double sampling in the subproblem.
>**Q2.** The Frobenius norm may be still a little bit large. Can the authors show the dependence on the number of tasks in the bounds since it may matter in the MTL?
Sure, thanks for the insightful suggestion! We have restated below the bounds with explicit dependence on the number of tasks $M$.
**Theorem 1:** $\epsilon_F^2 = O\Big(\frac{MT}{n}\Big)$ and $\mathbb{E}_{A,S}[R_{\rm gen}(A(S))] = O(M^{\frac{1}{2}} T^{\frac{1}{2}} n^{-\frac{1}{2}})$, since $G^2 = O(M)$.
**Theorem 2:** $\epsilon_F^2 = O\Big(\frac{M}{n}\big(\alpha + \frac{M}{n} + M\gamma\big)\Big)$ and $\mathbb{E}_{A,S}[R_{\rm gen}(A(S))] = O(M^{\frac{1}{2}} n^{-\frac{1}{2}})$, when $M \leq n$ and $M\gamma \leq 1$.
**Theorem 3 (without square root):**
$$\frac{1}{T}\sum_{t=1}^T \mathbb{E}_A \|\nabla F_S(x_t) \lambda_t^*(x_t)\|^2 = O\Big(\frac{1}{\alpha T} + \alpha + M^{\frac{3}{2}}\gamma\Big),$$
where $\gamma$ can be controlled to ensure $M^{\frac{3}{2}}\gamma$ is small.
We will also include a discussion of the dependence on $M$ in our revision, and further improve the bound using other matrix norms in future work. | Rebuttal 1:
Rebuttal: ## General Response
We appreciate the reviewers' constructive comments. All reviewers agree that the paper makes solid theoretical contributions and offers "a bigger picture" by treating the three types of errors unique to MOL -- optimization, generalization, and CA distance -- in a holistic framework. It also provides the first algorithmic-stability-based generalization bound for MOL.
Despite this, we want to re-emphasize that **our standpoint is not a new algorithm that improves over SOTA MOL algorithms**. Instead, we propose a framework for theoretical comparisons of stochastic MGDA and static weighting, complementing the recent empirical studies [15,40] in NeurIPS 2022; see our **"Summary of Theoretical Contributions"** in the separate thread below.
The reason we use **MoDo as a stochastic variant of MGDA** is that it
- is the simplest possible unbiased stochastic variant of MGDA (vanilla mini-batch MGDA does not converge) that has theoretical guarantees (with better rates or more relaxed assumptions than [8,26,47]);
- interpolates between static and dynamic weighting, thus flexibly controlling the trade-off, which cannot be achieved by existing methods [24,25,36,44].
Below we address three major questions from the initial reviews.
>**Q1. The benefit of MoDo in sample efficiency or error. (Reviewers LuAm, UWFs)**
Regarding the benefit, **MoDo does have benefit in terms of theoretical guarantee of sample efficiency and test risk**, which we explain below.
1. **In terms of sample efficiency, the overall sample complexity of MoDo could be better than other MOO variants.** Although MoDo requires computing 3 independent stochastic gradients at each iteration, it does not necessarily harm the total sample efficiency as questioned by Reviewer LuAm. This is because the **convergence rate of MoDo is faster under the same assumptions**; see the Table below. When the bounded function values assumption is removed, MoDo achieves $\mathcal{O} (\epsilon^{-2})$, significantly improving MoCo with $\mathcal{O}(\epsilon^{-10})$.
The overall sample complexity is:
```
(number of iterations T) X (number of samples per iteration n') = Tn'.
```
Since MoDo improves the number of iterations $T$ to achieve small optimization error, it could improve the overall sample complexity $T\times n'$ when $T$ dominates.
|Algorithm|Assume bounded function values|Sample complexity|
|---|---|---|
|CR-MOGM [47, Theorem 3]|Yes|${\cal O} (\epsilon^{-2})$|
|MoCo [8, Theorem 2]|No|${\cal O} (\epsilon^{-10})$|
|MoCo [8, Theorem 4]|Yes|${\cal O} (\epsilon^{-2})$|
|MoDo (Ours, Theorem 3)|No|${\cal O} (\epsilon^{-2})$|
2. **In terms of test risk, MoDo has theoretical guarantee of the generalization error, thus the total test risk**, but the generalization error guarantees of MGDA, MoCo and CR-MOGM are unknown or at least not established.
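As a back-of-the-envelope illustration of point 1 (ours; all constants set to 1), the three-gradients-per-iteration cost of MoDo is dwarfed by its faster iteration complexity:

```python
def total_samples(iters: float, samples_per_iter: int) -> float:
    """Total sample cost = (number of iterations T) x (samples per iteration n')."""
    return iters * samples_per_iter

eps = 1e-2  # target accuracy
# MoDo: T = eps^{-2} iterations, 3 independent stochastic gradients per iteration.
modo = total_samples(eps ** -2, 3)
# MoCo without bounded function values [8, Theorem 2]: T = eps^{-10}, 1 gradient per iteration.
moco = total_samples(eps ** -10, 1)
assert modo < moco  # about 3e4 samples vs. 1e20: the rate, not n', dominates
```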
>**Q2. Limited empirical evaluation or recent benchmarks. (Reviewers LuAm, UWFs, L3j5)**
1. **Misunderstanding of our key contributions.** It is worth mentioning again that **we do not claim that MoDo works better than the SOTA algorithms on multi-task learning**. Therefore, it is not our main target to demonstrate MoDo performs better than SOTA algorithms for MTL.
2. **Reasons for current baselines.** We choose static weighting as an important baseline because it has been demonstrated in recent works [15, 40] that on many tasks, static weighting outperforms the recent MOL-based algorithms such as PCGrad and CAGrad.
3. **Other benchmarks as per reviewers' requests.** We compare with PCGrad, CAGrad, MoCo, etc., following the experimental settings in [arxiv:2111.10603]. Results are summarized in the **attached PDF, Tables 1-3** and will be included in the revision. The results show that MoDo's performance is comparable to or better than these baselines.
>**Q3. Why use double sampling, how it mitigates gradient bias (Reviewers LuAm, UWFs)**
The **CA distance or gradient bias** can be mitigated because MoDo uses **double/independent sampling** for the update. Specifically, taking the expectation with respect to the stochastic samples of the $\lambda_t$ update in (6a) gives
$$\mathbb{E}_{z_{t,1}, z_{t,2}}[\nabla F_{z_{t,1}}(x_t)^\top \nabla F_{z_{t,2}}(x_t)\lambda_t] = \mathbb{E}_{z_{t,1}}[\nabla F_{z_{t,1}}(x_t)]^\top \mathbb{E}_{z_{t,2}}[\nabla F_{z_{t,2}}(x_t)]\lambda_t = \nabla F_S(x_t)^\top \nabla F_S(x_t)\lambda_t,$$
which is equal to the full batch gradient of problem (4b). This allows us to derive the convergence of distance to CA direction in Lemma 2 using the stochastic optimization framework.
However, if we use the same sample for $z_{t,1}$ and $z_{t,2}$, this equality does not hold, i.e.,
$$\mathbb{E}_z[\nabla F_z(x_t)^\top \nabla F_z(x_t)\lambda_t] \neq \nabla F_S(x_t)^\top \nabla F_S(x_t)\lambda_t.$$
As a consequence, the distance to CA direction cannot be reduced due to this bias.
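This bias is easy to check numerically. The sketch below (our illustration with synthetic Gaussian gradient noise, not the paper's experiment) verifies that the double-sampling estimator matches the full-batch quantity $\nabla F_S(x_t)^\top \nabla F_S(x_t)\lambda_t$ on average, while reusing one sample leaves a systematic noise-covariance bias:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, n = 5, 3, 200_000                       # parameter dim, #objectives, #Monte Carlo draws
mean_grad = rng.normal(size=(d, M))           # plays the role of the full-batch nabla F_S(x_t)
lam = np.full(M, 1.0 / M)                     # a fixed weight vector lambda_t

def sample_grads(k: int) -> np.ndarray:
    """Stochastic gradients: full-batch gradient plus zero-mean, unit-variance noise."""
    return mean_grad + rng.normal(size=(k, d, M))

g1, g2 = sample_grads(n), sample_grads(n)     # two independent draws (double sampling)
double = np.einsum('kdm,kdj,j->m', g1, g2, lam) / n   # unbiased for grad^T grad lam
single = np.einsum('kdm,kdj,j->m', g1, g1, lam) / n   # reuses one draw: biased
exact = mean_grad.T @ mean_grad @ lam

assert np.allclose(double, exact, atol=0.1)   # double sampling: no systematic bias
assert np.all(single - exact > 1.0)           # single sampling: bias d * lam_m = 5/3 per entry
```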
We would like to make the most of the interactive discussion function provided by OpenReview to clarify any concerns the reviewers may have. We look forward to the rolling discussion and further engagement with the reviewers and area chair!
Sincerely,
Authors
Pdf: /pdf/03cce0184f419739391859ccfeae0f1483752cbb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Generalized Information-theoretic Multi-view Clustering | Accept (poster) | Summary: This paper proposes a new framework for unsupervised multi-view learning based on information bottleneck theory. The paper defines three desiderata for multi-view representation learning in terms of mutual information, namely, comprehensiveness, concentrate, and cross-diversity. The paper further introduces a clustering term to preserve the original data structure and leverages deep neural networks and stochastic gradient variational Bayes to optimize the objective function. The paper evaluates the proposed method on four real-world datasets and shows that it outperforms several state-of-the-art algorithms in terms of clustering performance.
Strengths: • This paper provides a general and principled information-theoretic framework for multi-view clustering that does not rely on strict assumptions about semantic consistency across views.
• It incorporates three requirements for multi-view representation learning that balance the trade-off between informativeness, compression, and diversity of the latent features.
Weaknesses: • The authors should clarify how their definition of comprehensive, concentrative, and cross-diverse multi-view representation differs from the one used by Completer [22], which also maximizes the mutual information between views and minimizes conditional entropy of different views.
• The loss function consists of four terms with different roles: data reconstruction, multi-regularization, information shift, and clustering. The authors should conduct ablation studies to show the contribution and necessity of each term for the proposed method.
• The balance parameters $\beta$ and $\gamma$ control the trade-off in the objective function. The authors should provide some theoretical or empirical guidance for choosing the optimal values of these parameters for different datasets or scenarios.
• As the robustness of the model is important to evaluate, what is the definition of robustness in this paper and the relation between the robustness and the proposed information bottleneck theory?
• In Eq.(4), what is the difference between $\mathbf{Z}^{(v)}$ and $\mathbf{Z}$? How to learn $\mathbf{Z}$?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: The authors are encouraged to address the weaknesses pointed out in the previous section, such as conducting ablation studies. These improvements would make the contribution of the paper more evident and might increase my rating.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our work and constructive comments.
**Q1:** The authors should clarify how their definition of comprehensive, concentrative, and cross-diverse multi-view representation differs from the one used by Completer [22], which also maximizes the mutual information between views and minimizes conditional entropy of different views.
**A1:** Completer maximizes the mutual information between views and minimizes the conditional entropy across views to learn the information shared between views, thus forcing the latent representations of different views to be consistent. The shared information is assumed to contain all task-related information in the data. However, this assumption is often unreasonable in practice, e.g., different views may share the same background information but depict different foreground information.
In contrast, the proposed model (IMC) is based on the more general assumption that task-related information exists both in the view-shared part and in the view-peculiar part. Under the guidance of the information bottleneck principle, we seek to retain more intrinsic information (comprehensive) while compressing each view as much as possible (concentrative), which requires each view to contain diverse information (cross-diverse); these correspond respectively to three different mutual information terms. This is the motivation for proposing IMC, which differs from Completer.
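In variational information bottleneck models, compression terms such as the concentrative $\min I(\textbf{Z}^{(v)}; \textbf{X}^{(v)})$ are typically optimized through the standard variational upper bound $\mathbb{E}_x\,\mathrm{KL}(q(z|x)\,\|\,p(z))$. Below is a sketch assuming Gaussian encoders and a standard normal prior (our illustration, not necessarily the paper's exact implementation):

```python
import numpy as np

def compression_bound(mu: np.ndarray, logvar: np.ndarray) -> np.ndarray:
    """Per-sample KL( N(mu, diag(exp(logvar))) || N(0, I) ): a variational
    upper bound on the compression term I(Z; X) under a Gaussian encoder."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)

mu, logvar = np.zeros((4, 8)), np.zeros((4, 8))
assert np.allclose(compression_bound(mu, logvar), 0.0)  # encoder equals prior: zero bound
assert np.all(compression_bound(mu + 1.0, logvar) > 0)  # informative encoder pays a KL cost
```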
**Q2:** The loss function consists of four terms with different roles: data reconstruction, multi-regularization, information shift, and clustering. The authors should conduct ablation studies to show the contribution and necessity of each term for the proposed method.
**A2:** We apologize for the confusion: we did conduct the ablation experiments and analysis, but did not set up a dedicated ablation study section in the manuscript, which made them easy to overlook. We will highlight the ablation study more clearly in the new version.
In line 242 of Section 5.1 (Experimental Settings), we construct two variants, IMC-v1 and IMC-v2, to compare with IMC, and show the experimental results in Tables 1 and 2. The KL divergence clustering term improves performance significantly, while the information shift term yields a smaller improvement. In Section 5.2 (Performance Analysis), we conclude that end-to-end models help to learn more discriminative representations.
**Q3:** The balance parameters $\beta$ and $\gamma$ control the trade-off in the objective function. The authors should provide some theoretical or empirical guidance for choosing the optimal values of these parameters for different datasets or scenarios.
**A3:** You point out one of the main limitations of this approach. As we state in Section 5.4, empirical guidance can be based on prior knowledge of the dataset: in our experience, if the semantics of the multiple views are largely consistent, $\beta$ should be set to a small value, and vice versa. As mentioned in Section 6, parameter selection is a practical limitation of the method; theoretical guidance for choosing the optimal parameters is a direction for future research.
**Q4:** As the robustness of the model is important to evaluate, what is the definition of robustness in this paper and the relation between the robustness and the proposed information bottleneck theory?
**A4:** We believe that the robustness of the model is its ability to adapt to data generated in various complex scenarios, including noisy data. To evaluate the robustness of the proposed method, we added sparse noise of different scales to the dataset; the clustering results are shown in Table 2. It can be observed that IMC degrades the least, while the Completer model degrades noticeably. This may be because the multi-view semantic consistency assumption becomes more fragile as the noise ratio increases: the information shared by the views then incorporates more noise.
We are looking forward to adding other indicators to measure model robustness. If you have any suggestions on robustness metrics, it will greatly improve the quality of our work.
**Q5:** In Eq.(4), what is the difference between $\textbf{Z}^{(v)}$ and $\textbf{Z}$? How to learn $\textbf{Z}$?
**A5:** $\textbf{Z}$ is the unified latent representation and $\textbf{Z}^{(v)}$ is the latent representation of the $v$-th view. We seek to learn a unified multi-view representation that satisfies the three desiderata, so we introduce representations for each view. $\textbf{Z}$ can be obtained from the posterior $p(\textbf{z}|\textbf{z}^{(1)}, \textbf{z}^{(2)}, \ldots, \textbf{z}^{(v)})$, which is modeled by multiple fully connected layers (line 166). An intuitive illustration is given in Figure 2(c).
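A minimal sketch of such a fusion module (our illustration with hypothetical layer sizes; the actual network at line 166 may differ) concatenates the per-view representations and passes them through fully connected layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_views(view_reps, w1, w2):
    """Sketch of modeling p(z | z^(1), ..., z^(V)) with fully connected layers
    over the concatenated per-view representations (hypothetical architecture)."""
    h = np.concatenate(view_reps, axis=1)   # [batch, V * d_view]
    h = np.maximum(h @ w1, 0.0)             # hidden layer with ReLU
    return h @ w2                           # unified representation Z

batch, d_view, d_hidden, d_z = 8, 16, 32, 10
z1 = rng.normal(size=(batch, d_view))       # view-1 latent representation Z^(1)
z2 = rng.normal(size=(batch, d_view))       # view-2 latent representation Z^(2)
w1 = 0.1 * rng.normal(size=(2 * d_view, d_hidden))
w2 = 0.1 * rng.normal(size=(d_hidden, d_z))
Z = fuse_views([z1, z2], w1, w2)
assert Z.shape == (batch, d_z)
```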
---
Rebuttal Comment 1.1:
Comment: After thoroughly reviewing the feedback from both the fellow reviewer and the author's rebuttal, I am of the opinion that this paper presents a new theory definition and employs a straightforward objective function to achieve cross-view diversity. Given these observations, I am inclined to maintain my current rating.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your recognition and highlighting of the contributions of our work. Your recognition motivates us to keep improving and contributing to the field. | Summary: This paper reformulates the multi-view clustering problem from an information-theoretic perspective and proposes a general theoretical framework. The authors extend the information bottleneck theory to unsupervised multi-view learning and achieve representation learning and clustering by leveraging deep neural networks and stochastic gradient variational Bayes.
Strengths: 1. This paper combines information theory with multi-view clustering and gives some new definitions to portray some properties of the multi-view domain, which is a very innovative idea that can contribute to the field.
2. This paper provides a solid theory and some key proofs are detailed and complete.
Weaknesses: 1. The datasets used in the experimental part are a bit less and small, and there are many challenging and large datasets in the field of multi-view clustering, the authors should add more experiments to enhance the sufficiency.
2. Parameter analysis part should show parameter changes on all datasets, you may show the figures of all datasets under one metric.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In line 113, should $I(Z^{(1)};X^{(2)})$ be $I(Z^{(2)};X^{(2)})$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As the authors state in the summary section, the model is limited by the choice of parameters, and it would be a great improvement if a mathematical strategy could be found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your compliment and constructive suggestion.
**Q1:** The datasets used in the experimental part are a bit less and small, and there are many challenging and large datasets in the field of multi-view clustering, the authors should add more experiments to enhance the sufficiency.
**A1:** Following your suggestion, we further test our model on a large-scale multi-view dataset.
**NUS-WIDE-Object (NUS)** is a dataset for object recognition that consists of 30000 images in 31 classes. We use the 5 features provided by the website, i.e., a 65-dimensional color histogram (CH), 226-dimensional color moments (CM), 145-dimensional color correlation (CORR), 74-dimensional edge distribution, and 129-dimensional wavelet texture.
| Datasets | Metrics | DMVAE | MIB | CMIB-Nets | Completer | IMC-v1 | IMC-v2 | IMC |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| | ACC | 18.24$\pm$0.64 | 15.24$\pm$0.56 | 14.25$\pm$0.35 | 16.88$\pm$0.46 | 17.60$\pm$0.40 | 14.38$\pm$0.36 | $\textbf{19.78} \pm \textbf{0.47}$ |
| NUS | NMI | 19.87$\pm$1.05 | 15.75$\pm$0.42 | 14.65$\pm$0.40 | 17.68$\pm$0.35 | 18.67$\pm$0.62 | 15.55$\pm$0.74 | $\textbf{21.12} \pm \textbf{0.34}$ |
| | ARI | 6.78$\pm$0.81 | 5.83$\pm$0.84 | 4.26$\pm$0.74 | 6.24$\pm$0.64 | 5.68$\pm$0.48 | 5.68$\pm$0.48 | $\textbf{8.32} \pm \textbf{0.41}$ |
From the results, it can be seen that NUS is a challenging dataset, and there is still much room for improvement in the clustering performance of all the models. The optimal and sub-optimal performance achieved by the proposed IMC and the DMVAE model demonstrates that incorporating the clustering loss to bootstrap representation learning can significantly improve clustering performance, a conclusion consistently supported by the ablation experiments comparing IMC, IMC-v1, and IMC-v2.
**Q2:** Parameter analysis part should show parameter changes on all datasets, you may show the figures of all datasets under one metric.
**A2:** This suggestion is very constructive. We will present the parameter analysis under the NMI metric for all datasets in the revised version; the results for the ACC and ARI metrics will be presented in the supplementary material.
**Q3:** in line 113, should the $I(\textbf{Z}^{(1)}; \textbf{X}^{(2)})$ be $I(\textbf{Z}^{(2)}; \textbf{X}^{(2)})$?
**A3:** We apologize for the serious typo; it should read: minimize $I(\textbf{Z}^{(1)}; \textbf{X}^{(1)})$ and $I(\textbf{Z}^{(2)}; \textbf{X}^{(2)})$, for maximum information compression of each view. | Summary: This paper presents an innovative information-theoretic framework for multi-view clustering, which overcomes the limitations of existing methods that rely on strict semantic consistency assumptions. By leveraging deep neural networks, the proposed method achieves more stable and superior clustering performance on several datasets.
Strengths: The idea of incorporating the information bottleneck into multi-view clustering is intriguing and enlightening.
Weaknesses: 1. The importance of Eq. 3 in the proposed method is evident, as it provides the IB-based objective for the clustering approach. However, the paper lacks an explanation of how the original IB objective (Eq. 1) is transformed into the clustering objective, and why it takes the specific form presented. Additionally, if the information bottleneck based clustering is not an original contribution of this paper, it should be properly referenced.
2. The concept of the three desiderata is unclear. From Fig. 1, it appears that the final result of optimizing the three desiderata is to maximize $I(Z; X^{(1,2)})$, minimize $I(Z^{(1,2)}; X^{(1,2)})$, and maximize $I(Z; Z^{(1,2)})$. This optimization seems contradictory, particularly when it aims to maximize the mutual information between $Z$ and the view-specific $Z^{(1,2)}$ while minimizing that between the raw data $X$ and $Z^{(1,2)}$. Furthermore, the paper lacks an explanation of why these three desiderata are useful for learning cross-view representations, and there are no ablation studies to investigate their influence.
3. The proposed generalized multi-view clustering framework seems to differ from Eq. 3 only in the information shift term. This should be clarified and elaborated upon.
4. The experiments conducted on relatively simple and small-scale datasets limit the persuasiveness of the results. Additionally, the comparison baselines appear outdated, raising concerns about the effectiveness of the proposed method.
5. There are some typos, such as line 112: "I (Z^{(1)}; X^{(2)})."
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See weaknesses.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors have discussed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your thoughtful and detailed feedback.
**A1:** Under the information bottleneck principle, [1] introduced the unsupervised information bottleneck objective Eq.(2), which is essentially the same as $\beta$VAE [2]. Drawing on the deep clustering methods DEC [3] and VaDE [4], we add a KL divergence term to Eq.(2) to discover the cluster structure of the data. The gap between Eq.(3) and Eq.(2) is thus the extent to which the clustering structure of the data is preserved during dimensionality reduction.
[1] Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. In *ICLR*, 2017.
[2] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In *ICLR*, 2017.
[3] Xie, Junyuan, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering. In *ICML*, pages 478-487, 2016.
[4] Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In *IJCAI*, 2017.
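For concreteness, the kind of KL clustering term used in DEC [3] can be sketched as follows (our illustration; the exact clustering term in Eq.(3) may differ). Soft assignments $q$ come from a Student's-t kernel between embeddings and cluster centers, and a sharpened target distribution $p$ is derived from $q$:

```python
import numpy as np

def dec_clustering_loss(z: np.ndarray, centers: np.ndarray) -> float:
    """KL(P || Q) clustering term in the style of DEC (Xie et al., 2016)."""
    # Soft assignments: Student's t kernel between embeddings and cluster centers.
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    q = 1.0 / (1.0 + d2)
    q /= q.sum(axis=1, keepdims=True)
    # Target distribution: square and renormalize q to sharpen confident assignments.
    p = q ** 2 / q.sum(axis=0)
    p /= p.sum(axis=1, keepdims=True)
    return float((p * np.log(p / q)).sum())

z = np.random.default_rng(1).normal(size=(100, 4))    # toy latent representations
loss = dec_clustering_loss(z, centers=z[:3].copy())   # 3 toy cluster centers
assert loss >= 0.0                                    # KL divergence is non-negative
```

Minimizing such a term encourages a cluster-friendly latent structure, consistent with the ablation gains we report for the KL clustering term.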
**A2:** The optimization is indeed a trade-off. As in the unsupervised information bottleneck ($\beta$VAE), we want to retain information about the data distribution ($\max I(\textbf{Z}; \textbf{X})$) while forgetting sample-identity information ($\min I(\textbf{Z}; i)$). For the multi-view scenario, we seek to retain the multi-view data distribution information ($\max I(\textbf{Z}; \textbf{X}^{(1)},\textbf{X}^{(2)})$) while forgetting single-view information ($\min I(\textbf{Z}^{(v)}; \textbf{X}^{(v)})$), and to match $\textbf{Z}$ with $\textbf{Z}^{(v)}$, we propose maximizing $I(\textbf{Z}; \textbf{Z}^{(1)}, \textbf{Z}^{(2)})$ as the glue of the trade-off (see the related proof in Proposition 3.1). There are two cases: when the information entropy of $\textbf{Z}$ is fixed, maximizing $I(\textbf{Z};\textbf{Z}^{(1)}, \textbf{Z}^{(2)})$ increases the information entropy of $\textbf{Z}^{(1)}$ and $\textbf{Z}^{(2)}$; when the information entropy of $\textbf{Z}^{(1)}$ and $\textbf{Z}^{(2)}$ is fixed, maximizing $I(\textbf{Z};\textbf{Z}^{(1)}, \textbf{Z}^{(2)})$ reduces the information entropy of $\textbf{Z}$. In both cases, cross-view diversity is being learned.
**For the ablation study**, in line 242 of Section 5.1 (Experimental Settings), we construct two variants, IMC-v1 and IMC-v2, to compare with IMC, and show the experimental results in Tables 1 and 2. The KL divergence clustering term improves performance significantly, while the information shift term yields a smaller improvement. In Section 5.2 (Performance Analysis), we conclude that end-to-end models help to learn more discriminative representations.
**A3:** Eq. (3) is clustering based on single-view information bottleneck, while the proposed information-theoretic multi-view clustering (IMC) is for multi-view data. The maximum information shift term is used to connect the unified multi-view representation $\textbf{Z}$ and the view-specific representation $\textbf{Z}^{(v)}$ to achieve cross-diversity.
**A4:** Following your suggestion, we further test our model on a large-scale multi-view dataset.
**NUS-WIDE-Object (NUS)** is a dataset for object recognition that consists of 30000 images in 31 classes. We use the 5 features provided by the website, i.e., a 65-dimensional color histogram (CH), 226-dimensional color moments (CM), 145-dimensional color correlation (CORR), 74-dimensional edge distribution, and 129-dimensional wavelet texture.
| Datasets | Metrics | DMVAE | MIB | CMIB-Nets | Completer | IMC-v1 | IMC-v2 | IMC |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| | ACC | 18.24$\pm$0.64 | 15.24$\pm$0.56 | 14.25$\pm$0.35 | 16.88$\pm$0.46 | 17.60$\pm$0.40 | 14.38$\pm$0.36 | $\textbf{19.78} \pm \textbf{0.47}$ |
| NUS | NMI | 19.87$\pm$1.05 | 15.75$\pm$0.42 | 14.65$\pm$0.40 | 17.68$\pm$0.35 | 18.67$\pm$0.62 | 15.55$\pm$0.74 | $\textbf{21.12} \pm \textbf{0.34}$ |
| | ARI | 6.78$\pm$0.81 | 5.83$\pm$0.84 | 4.26$\pm$0.74 | 6.24$\pm$0.64 | 5.68$\pm$0.48 | 5.68$\pm$0.48 | $\textbf{8.32} \pm \textbf{0.41}$ |
From the results, it can be seen that NUS is a challenging dataset, and there is still much room for improvement in the clustering performance of all the models. The optimal and sub-optimal performance achieved by the proposed IMC and the DMVAE model demonstrates that incorporating the clustering loss to bootstrap representation learning can significantly improve clustering performance, a conclusion consistently supported by the ablation experiments comparing IMC, IMC-v1, and IMC-v2.
**A5:** I am sorry for the serious typos, here should be to minimize $I(\textbf{Z}^{(1)}; \textbf{X}^{(1)})$ and $I(\textbf{Z}^{(2)}; \textbf{X}^{(2)})$, for maximum information compression of each view.
---
Rebuttal Comment 1.1:
Comment: I am pleased to acknowledge that the authors have effectively addressed my concerns. Their response has significantly clarified the contribution and technical details of the work. Additionally, they have incorporated additional experiments for a more comprehensive evaluation. Considering these improvements, I decided to revise my rating from "Borderline reject" to "Weak Accept".
---
Reply to Comment 1.1.1:
Comment: Thanks for taking your valuable time to read and respond in a timely manner. Your constructive comments contribute to the improvement of our work. | Summary: In this paper, the authors introduce representation learning with the unsupervised information bottleneck to multi-view clustering. Based on the framework of information bottleneck, the authors theoretically summarize 3 key properties (comprehensiveness, concentrate, and cross-diversity) required by multi-view clustering representation. Finally, a DEC module is added to obtain the clustering assignment.
Strengths: - The paper is well-motivated and easy to follow. The pointed three desiderata are convincing although there may be some typos in the definition.
- The paper is technically sound.
Weaknesses: - There are many typos, especially in key concepts. For example,
- there may be plenty of typos of notations in Definition 3.1, which causes my major concern about the soundness of this paper.
- In Line112-113, "minimize $I(Z^{(1)}, X^{(2)})$ and $I(Z^{(1)}, X^{(2)})$" seems wrong.
- The experiment may be a little insufficient. For example,
- The ablation experiments to study which 3 different desiderata are more important are missing.
- The visualization of learned $Z$ is also helpful to improve the quality. Since the primary contribution is the representation learning and there seems no apparent contribution to the clustering module, it is important to show whether the quality of representation is better.
- The running time is missing.
- The used mathematical techniques to derive the variational bound lack novelty (widely used in VIB, GIB, etc.).
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - (The concept of comprehensiveness) In Definition 3.1, the definition of comprehensive seems wrong: " can be predicted by $Z$".
- It may be somewhat confusing to connect $\min I(Z, X)$ and "eliminating redundant information of each view". It should minimize the irrelevant information between $X$ and $Z$.
- In Line 116, why is Eq. 4 used for *comprehensiveness* and *concentrate*? Is it the proposed cross-diversity?
Overall, the paper has some merits but requires significant proofreading. I believe the paper could be better but the quality of this version is unsatisfactory. I'd like to update my score after reading the response and other reviews.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: The limitations are not discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your thoughtful and constructive feedback.
**Q1:** In Line112-113, "minimize and $I(\textbf{Z}^{(1)}; \textbf{X}^{(2)})$ and $I(\textbf{Z}^{(1)}; \textbf{X}^{(2)})$" seems wrong.
**A1:** We apologize for the serious typo; it should read: minimize $I(\textbf{Z}^{(1)}; \textbf{X}^{(1)})$ and $I(\textbf{Z}^{(2)}; \textbf{X}^{(2)})$, for maximum information compression of each view.
**Q2:** Ablation experiments studying which of the 3 desiderata are most important are missing.
**A2:** We did conduct the ablation experiments and analysis, but we did not set up a dedicated ablation-study section in the manuscript, which made them easy to overlook. We will highlight the ablation study more clearly in the new version.
In line 242 of Section 5.1 Experimental Settings, we construct two variants, IMC-v1 and IMC-v2, to compare with IMC, and show the experimental results in Table 1 and Table 2. Using the KL-divergence clustering term improves performance significantly, while the information-shift term yields a smaller improvement. In Section 5.2 Performance Analysis, we conclude that end-to-end models help to learn more discriminative representations.
**Q3:** The visualization of learned $\textbf{Z}$ is also helpful to improve the quality. Since the primary contribution is the representation learning and there seems no apparent contribution to the clustering module, it is important to show whether the quality of representation is better.
**A3:** Thanks for your constructive suggestion. Visualizing the learned representations intuitively reveals the latent space structure after dimensionality reduction, which usually correlates strongly with the clustering metrics. We visualize the latent representations on all datasets and compare them with other multi-view representation learning methods; due to the page limit of the main text, we will present the results in the supplementary material.
**Q4:** The running time is missing.
**A4:** We tested the running time over 10 runs of 20 epochs on the UCI-digits dataset, on a computer with an NVIDIA RTX 2070 GPU.
| | DMVAE | MIB | Completer | IMC |
| ---- | ---- | ---- | ---- | ---- |
| Running Time /s | 12.58 $\pm$ 0.12 | 8.41 $\pm$ 0.15 | 3.65 $\pm$ 0.20 | 7.20 $\pm$ 0.12 |
The results show that Completer has the fastest running time, followed by our proposed model, whose running time is close to MIB's because a multivariate Jensen-Shannon estimator with high computational complexity is used.
**Q5:** (The concept of comprehensiveness) In Definition 3.1, the definition of comprehensive seems wrong: "can be predicted by $\textbf{Z}$".
**A5:** There may be some ambiguity here. Comprehensiveness means that the multi-view observations $\textbf{X}^{(1)}, \textbf{X}^{(2)}, ..., \textbf{X}^{(V)}$ can be predicted/generated from the unified representation $\textbf{Z}$.
**Q6:** It may be somewhat confusing to connect $\min I(\textbf{Z}; \textbf{X})$ and "eliminating redundant information of each view". It should minimize the irrelevant information between $\textbf{X}$ and $\textbf{Z}$.
**A6:** In the unsupervised setting, without labels, it is not known which information in the data is irrelevant to the task. We therefore aim to keep the principal, intrinsic information about each sample and to eliminate redundant information such as background and noise.
**Q7:** In Line 116, why is Eq. 4 used for comprehensiveness and concentrate? Is it the proposed cross-diversity?
**A7:** As in the unsupervised information bottleneck ($\beta$-VAE), we want to retain information about the data distribution ($\max I(\textbf{Z}; \textbf{X})$) while forgetting sample-identity information ($\min I(\textbf{Z}; i)$). In the multi-view scenario, we seek to retain the multi-view data distribution information ($\max I(\textbf{Z}; \textbf{X}^{(1)},\textbf{X}^{(2)})$) and to forget the single-view information ($\min I(\textbf{Z}^{(v)}; \textbf{X}^{(v)})$); to match $\textbf{Z}$ and $\textbf{Z}^{(v)}$, we propose $\max I(\textbf{Z}; \textbf{Z}^{(1)}, \textbf{Z}^{(2)})$ as a trade-off glue (see the proof of Proposition 3.1). There are two cases: when the information entropy of $\textbf{Z}$ is fixed, maximizing $I(\textbf{Z};\textbf{Z}^{(1)}, \textbf{Z}^{(2)})$ increases the information entropy of $\textbf{Z}^{(1)}$ and $\textbf{Z}^{(2)}$; when the information entropy of $\textbf{Z}^{(1)}$ and $\textbf{Z}^{(2)}$ is fixed, maximizing $I(\textbf{Z};\textbf{Z}^{(1)}, \textbf{Z}^{(2)})$ reduces the information entropy of $\textbf{Z}$. In both cases, cross-view diversity is learned.
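For concreteness, the terms above can be gathered into a single objective; the following is only an illustrative sketch in our notation, with hypothetical trade-off weights $\alpha, \beta > 0$ (the exact weighting in the paper may differ):
\\[
\max \; I(\textbf{Z}; \textbf{X}^{(1)}, \textbf{X}^{(2)}) + \alpha\, I(\textbf{Z}; \textbf{Z}^{(1)}, \textbf{Z}^{(2)}) - \beta \sum_{v=1}^{2} I(\textbf{Z}^{(v)}; \textbf{X}^{(v)}).
\\]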
---
Rebuttal 2:
Comment: I thank the authors for the response, which partially addressed my concerns (such as running time and the crucial formulation errors).
I have read the other reviews as well. I do not agree that the paper presents a new theory for multi-view clustering: the information bottleneck has been studied extensively and has also been applied to unsupervised settings. Moreover, the writing quality is poor, so the manuscript needs to be polished carefully before formal publication.
So I retain my initial rating.
---
Rebuttal Comment 2.1:
Comment: First of all, thanks for your professional and pertinent comments. We hope to communicate openly to minimize any comprehension gap and, ultimately, reach a consensus!
As presented in the chapter "Related Work", the information bottleneck principle has been extensively studied, as has recent research on multi-view clustering. Representative works include MIB [1] and Completer [2], but they are all based on a strict assumption that multi-view shared information is learned as task-relevant information.
The Information-theoretic Multi-view Clustering (IMC) approach proposed in this paper is based on the more general assumption that task-relevant information exists in both the view-shared and the view-specific parts of multi-view data. Theoretical analysis shows that the introduction of the novel information-shift term enables cross-view diversity, and extensive experiments demonstrate that IMC is more robust both on various multi-view datasets and on noisy datasets.
Before the final version, we will have the manuscript professionally polished by a native speaker to resolve any misunderstandings caused by imprecise wording.
We are keen to further solve the remaining issues after rebuttal, and look forward to your follow-up constructive comments to help us improve this work!
[1] Marco Federici, Anjan Dutta, Patrick Forré, Nate Kushman, and Zeynep Akata. Learning robust representations via multi-view information bottleneck. In *ICLR*, 2021.
[2] Yijie Lin, Yuanbiao Gou, Xiaotian Liu, Jinfeng Bai, Jiancheng Lv, and Xi Peng. Dual contrastive prediction for incomplete multi-view representation learning. *IEEE Trans. Pattern Anal. Mach. Intell.*, 2022. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Mechanism Design for Collaborative Normal Mean Estimation | Accept (spotlight) | Summary: - The authors study normal mean estimation in a collaborative setting. N agents each aim to obtain a good estimate for the unknown mean while incurring as little cost for data acquisition as possible.
- The authors show that a naive data aggregation mechanism leads to freeriding. Then, they propose a novel mechanism in which a central entity collects all data and only sends players noisy information about other players' samples, with the magnitude of the noise depending on the deviation from a player's sample to others' samples.
- The authors show that their mechanism fulfills desirable properties (Incentive compatibility, Individual rationality and Efficiency) in the single-dimensional Gaussian case and retains approximate versions of these properties more generally.
**I am keeping my positive score after reading other's reviews and the rebuttal.**
Strengths: - The paper is generally well written and offers a relevant new take on data-sharing incentives in federated learning.
- The proposed mechanism does not rely on side payments and is therefore quite flexible.
- The authors consider a very general class of player strategies and still manage to prove strong theorems.
Weaknesses: - The placement of "recommended strategies" within the formalism is a bit confusing, as there does not appear to be any difference in the analysis of recommended vs non-recommended strategies.
- As the recommended, desirable strategy profiles are only shown to be Nash equilibria but not dominant strategies, some discussion of other equilibria would be nice (in particular, are all equilibria essentially equivalent to the recommended strategies modulo some simple transformation, or are there very different equilibria?)
- Some very relevant citations seem to be missing from the related work section. In particular, the proposed mechanism is very similar to the idea of [peer prediction](https://pubsonline.informs.org/doi/abs/10.1287/mnsc.1050.0379), and connections to that strand of work could be highlighted better.
- Minor comments:
- 256-258 contains "simply" twice in the same sentence
- 352 seems to contain a grammar error/typo ("given her")
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is it necessary for the affordance to contain $\eta$, when the players can compute that from $Y$ and $Z$?
- Should $2\sigma \sqrt{c/m}$ in line 221 use $cm$ instead of the fraction?
- Why is Algorithm 1 using $m\geq 4$ as the cutoff, while 221 suggests a phase transition for $m\geq 3$?
- Is the general form of the allowances necessary? Since the goal is mean estimation I would imagine that allowances that only use the mean (like in section 5) could simplify the exposition and analysis.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: - The scope of the paper is firmly limited to the (fundamental) problem of mean estimation.
- All players are assumed to be perfectly homogeneous in the sense that they have the exact same data acquisition costs and their samples have the same conditional means and variances. However, I do expect the proof intuitions to extend beyond that case.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions.
*Recommended vs non-recommended strategies:* We have shown that the recommended strategies are a Nash equilibrium, which means that when all other agents are following the recommended strategies, then the best response for an agent is to also follow the recommended strategy. Moreover, if all agents follow the recommended strategies, the social penalty is small.
*Regarding other equilibria:* This is an interesting question. Due to the complexity of the space of estimators, it is not clear to us what other optimal estimators could lead to other Nash equilibria. You are correct, though, that there are multiple Nash equilibria that are similar to each other. For instance, all agents can add some constant $a$ to each sample they collect and then subtract $a$ from the final estimate; each value of $a$ corresponds to a different Nash equilibrium.
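As a quick numerical illustration of this invariance (a toy sketch of our own, not part of the paper's mechanism), shifting every sample by a constant $a$ and subtracting $a$ from the final sample-mean estimate reproduces the unshifted estimate:

```python
import random

random.seed(0)
samples = [random.gauss(5.0, 1.0) for _ in range(1000)]

def estimate(data):
    # sample mean, the natural estimator for a normal mean
    return sum(data) / len(data)

a = 3.7  # arbitrary constant defining an alternative, equivalent strategy
shifted = [x + a for x in samples]   # agent shifts every collected sample by a

direct = estimate(samples)
via_shift = estimate(shifted) - a    # ...and subtracts a from the final estimate
# the two estimates agree up to floating-point error, so the strategies coincide
```

This is why such shifted strategies all yield the same penalty and hence form a family of equivalent equilibria.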
*References on peer prediction:* Thank you for mentioning the references on peer prediction. We will add these to our related work section.
*Inclusion of $\eta$:* We chose to present it this way for clarity. We thought it would be clearer if all numerical quantities in Algorithm 1 were hidden (it is crucial that the random values in line 14 be hidden), and the mechanism reveal what is necessary. See lines 142--143. However, you are correct that if $\alpha$ was also published as part of the mechanism, then revealing $\eta$ would not be necessary.
*Line 221:* The current formula in line 221 is accurate.
*$m\ge3$ vs $m\ge4$:*
$m\ge3$ is the threshold at which an agent achieves the smallest penalty by collecting no data points when the other agents are submitting $\sigma/\sqrt{cm}$ points in the mechanism described in line 209, while $m\ge4$ is the threshold at which the equilibrium of that mechanism is approximately efficient with $PR \le 2$.
*Is the general form of allowance [allocations] necessary? Why not use something similar to Section 4.2?* We study general allocation spaces for three reasons:
- First, we wish to point out that even if the mechanism returns a mean, the agent need not simply accept it (as in Section 4.2); agents could very well post-process this estimate to obtain a more accurate one. In fact, in Appendix 4.2 we show that under the more general strategy space, agents can use a convex combination of the mean returned by the mechanism and the mean of their original dataset to reduce their MSE. They can then leverage this insight to collect less data and obtain a lower overall penalty.
- Second, you are correct that a mechanism that simply returns the estimate in (8) instead of the 3-tuple, will have a similar guarantee to our current mechanism (agents will submit truthfully and accept this estimate). However, this will at best marginally simplify the presentation of the mechanism, while the analysis would be exactly the same.
- Third, this form demonstrates that we have studied the problem in its fullest generality and better elucidates our contributions. For instance, if we had only studied mechanisms which return an estimate (i.e a scalar value), it could lead to other questions such as, "Can you return something else instead of a simple estimate and get better guarantees?".
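To illustrate the first point numerically, here is a small sketch under our own toy assumptions (not the paper's mechanism): an agent holding an unbiased estimate with variance $v_1$ who receives an independent unbiased estimate with variance $v_2$ can lower its MSE by taking the variance-optimal convex combination.

```python
import random

random.seed(1)
mu = 2.0                 # true mean (unknown to the agent)
v1, v2 = 1.0, 0.25       # variances of the two independent unbiased estimates
w = v2 / (v1 + v2)       # variance-minimizing weight on the agent's own estimate

n = 20000
sse = {"own": 0.0, "mech": 0.0, "combined": 0.0}
for _ in range(n):
    own = random.gauss(mu, v1 ** 0.5)   # mean of the agent's own samples
    mech = random.gauss(mu, v2 ** 0.5)  # estimate returned by the mechanism
    comb = w * own + (1 - w) * mech     # post-processed estimate
    sse["own"] += (own - mu) ** 2
    sse["mech"] += (mech - mu) ** 2
    sse["combined"] += (comb - mu) ** 2

mse = {k: s / n for k, s in sse.items()}
# combined MSE concentrates near v1*v2/(v1+v2) = 0.2, below both 1.0 and 0.25
```

This is the kind of post-processing a strategic agent can perform, which is why the mechanism must account for it rather than assume the returned estimate is accepted as-is.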
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
Regarding recommended strategies: Do I understand correctly that "recommended strategy" is just a name given to the strategies at the particular analyzed equilibrium, and that there is no formal meaning of the term beyond that?
Regarding other equilibria: I see that these simple other equilibria exist. Do you have an intuition on whether there might also be equilibria that are less socially optimal (i.e., agents corrupt the samples in a way that cannot be corrected for)?
---
Reply to Comment 1.1.1:
Comment: Thank you for your questions and your patience. Both these questions are related, so we will answer the second question first.
*Are you aware of other Nash equilibria?* Despite trying, we have been unable to find other Nash equilibria that are less socially optimal, and are hence unable to bound the price of *anarchy*. We will clarify this in the manuscript.
*Regarding recommended strategies:* Yes, you are correct that the 'recommended strategies' are the Nash equilibrium which we have analyzed and where the efficiency is also very good. Recommending a set of strategies is a way for the mechanism designer to communicate a good Nash equilibrium to the agents (as it may not be obvious at the outset). | Summary: The paper considers a collaborative mean estimation
setting where a set of agents can all collect i.i.d. samples from an underlying Gaussian distribution, and their goal is to share data with each other in order to estimate the mean of the distribution. Each agent has a fixed cost for collecting each sample, and its negative utility equals the sum of the estimation error and the sampling cost. Naturally, each agent has an incentive to free-ride and under-collect, reporting false data to the data collector and using other agents' data instead. The goal is to design a mechanism that can incentivize the agents to share their data more truthfully, in order to ensure successful mean estimation. The authors propose an incentive compatible and individually rational mechanism based on the idea of "punishing" agents whose estimates deviate too much from the true mean by sharing false data with them, and use a minimax estimator on the collected dataset to estimate the mean. The authors show that the mechanism's social penalty is at most twice the global optimum.
Strengths: The authors propose an interesting problem, and provide a nice solution to it.
The paper is well-written.
The result is somewhat surprising and I believe may be important for future research.
Weaknesses: The presentation of the result suggests that the mechanism is incentive compatible globally and not just at the equilibrium point. This should be clarified earlier in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors point to a reference for their definition of IC and IR? Specifically, the definitions I'm familiar with (e.g., see [1]) require the mentioned properties to hold at any point, not just at the equilibrium. In any case, it would be better to emphasize early on that the mentioned claims only hold at the equilibrium. As is, the abstract and introduction suggest to me that the properties hold globally, which is a much stronger claim.
Can the authors provide a formal proof of the footnote in Section 4? While the claim is not central to their paper, I think it should either be proved formally, or the authors should adjust the language of the footnote to make clear that they do not have a proof.
Also, regarding the appendix, is there any reason the authors have chosen to put Appendix F at the end, rather than the beginning?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions.
*Definition of IC and IR:* We agree with you on this. There are two common notions of IC that are used in the literature, dominant strategy (DSIC) and Bayes-Nash (BNIC). We don't have a DSIC and while we have a Nash equilibrium, we are clearly non-Bayesian so we couldn't use BNIC either. However, we agree that IC might be taken to mean DSIC, so we will switch to "Nash Incentive Compatibility" as a compromise.
As for IR, we will also clarify that it only holds at the Nash equilibrium. (We do wish to point out that it may be difficult to design an IR mechanism regardless of the strategies of the other agents. For instance, if all agents except one agent are malicious and submit false data, then the lone agent will lose out.)
*Proof of footnote*: Are you referring to the footnote on page 4 (and not footnote 4)? Yes, we will include a proof sketch. In fact, our current analysis uses a sequence of Normal priors and Bayesian estimators to analyze the maximum (frequentist) risk. We believe these calculations can be used as-is to prove a result for Normal priors. The techniques can also be applied to non-Normal priors; however, the results will be prior-dependent, and if the prior is complex, the results may be hard to interpret.
*Order in the appendix:* Appendix F consists of technical (algebraic) lemmas that are important but not central to most of the proofs. For a more coherent presentation, we chose to defer them to the end.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I don't have any additional questions. | Summary: The author consider the problem of designing a data-sharing mechanism that encourages a group of $m$ agents to share their iid collected samples truthfully and further uses the shared data to refine their estimations of the normal mean. To ensure truthful reporting, the mechanism introduces additional noise into the shared data. The amount of noise is determined based on the discrepancy between the mean of an agent's reported data and the mean reported by other agents, with the noise variance increasing proportionally. The authors demonstrate that this mechanism achieves both individual rationality and incentive compatibility. In addition, the mechanism is also efficient compared to the minimum social penalty. The authors further extend the result to estimation of the mean in high-dimensional settings with a bounded variance.
Strengths: The problem of data sharing is well motivated by real world applications and has gained great attention in recent years. The difficulties associated with this type of problem often involve determining the appropriate pricing strategy for data to incentivize genuine data collection while discouraging data fabrication. This paper addresses the problem in an elegant way in the language of normal mean estimation.
The authors explore the method of inserting noise into the reallocated data, with a variance that depends on the quality of an agent's report. The idea has appeared in much previous work, but the authors show that by carefully designing the noise level, the mechanism can simultaneously be IC, IR, and efficient. Moreover, the results hold for any number of agents and can also be extended to non-normal distributions. I believe this work may be of interest to the community.
The paper is well-written and the results are sound.
Weaknesses: It seems that the current result is quite limited to the specific form of the social/individual penalty. The proof of the IC also seems to heavily rely on this specific structure. Can the results be extended to more general penalty functions?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What if there exists a common prior of $\mu$? Does the result still hold in general?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No concerns here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions.
*Specific form of penalty:* Yes, you are correct. We do believe that these results can be extended to other penalty forms and supervised learning problems, but we may need to relax from an exact to an approximate Nash equilibrium. This is because for other penalty forms, it is hard to design exactly minimax-optimal estimators and it is customary to settle for rate(order)-optimal estimators. See 2nd para of Section 5. We are working on extending the ideas in this work to more general settings.
*When there is a common prior $\mu$:* Yes, our results can be adapted when there is a common prior -- see footnote 1. In fact, our analysis constructs a sequence of Normal priors to analyze the maximum risk, so if this prior is Normal, you can directly use the calculations in our proofs to obtain a Bayesian result which depends on the priors. The techniques can also be applied to non-normal priors; however, the results will be prior-dependent, and if the prior is complex, the results may be hard to interpret.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have no further questions. | Summary: The paper designs a mechanism that collects data from n agents to estimate the mean of a Gaussian distribution. The agents incur costs to collect data, they can misreport data, and they strategically choose the level of effort and the data to report. They propose a mechanism that corrupts the returned datasets according to the difference between an agent's reported data and others' data. They prove that their mechanism is IC, IR, and achieves a 2-approximation of the optimal social welfare.
Strengths: The paper studies an interesting problem and the presentation is clear.
Weaknesses: The result may be overshadowed by the results in (Cai et al., 2015), which is not cited in the current paper. Although the problem is formulated without payments, it is pretty much a mechanism design problem with payments, because the designer knows the agents' utility function exactly and can add arbitrary noise to adjust an agent's utility freely. This is very similar to adding a numerical payment to the allocation function, which makes the problem very close to (Cai et al., 2015). However, (Cai et al., 2015) achieves a much stronger result: they are able to achieve optimal social welfare at a dominant strategy equilibrium. In addition, when their mechanism is used, the agents do not have the incentive to misreport data, which means that truthfully reporting data will be a weak BNE. In this paper, only 2-approximation is achieved at a BNE, which is pretty far from the potential optimal. It may not be straightforward to apply (Cai et al., 2015) because adding noise can only give negative payments, but it is also not clear whether (Cai et al., 2015) can yield a better result than 2-approximation.
Cai et al., 2015, "Optimum Statistical Estimation with Strategic Data Sources"
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is it possible to use the VCG-like mechanism from (Cai et al., 2015) in your problem?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for mentioning the paper by (Cai et al., 2015). This is a nice and relevant paper that we were not aware of, but will be sure to include it in the revision. While Cai et al., 2015 study a general supervised learning problem, when applied to mean estimation, their setting is more restrictive than ours.
The **primary** difference is that the agents' strategy spaces are different. In the main part of our paper (Section 2 and 3), the agents can choose how much data to collect, what to submit, and most importantly how to estimate the mean from the information received from the mechanism. In (Cai et al., 2015), the agents can only choose how much effort to exert (where the effort is similar in spirit to the number of data points in our paper). This means that the mechanism design problem is significantly more challenging in our setting. Based on this, we will highlight the key differences below, while also offering rebuttals to the reviewer's claims.
1. *The setting is similar to payments in Cai et al., since the designer knows the utility and can add noise freely:* There are two key components to designing a mechanism under our strategy space. First, the mechanism should decide 'how much' to give away (which is similar to payments), and second, 'how to give away', since the agents can use the information they receive to decide how to estimate the mean. While several prior works have studied the former component (e.g., Cai et al., Karimireddy et al., Blum et al.), the latter component (which we believe is significantly more challenging) has not, to our knowledge, been studied.
To illustrate this further, suppose the mechanism decides on a 'payment' and returns a mean estimate that corresponds to this payment. However, there is no guarantee that the agent will simply accept this mean. The agent may post process this estimate, for instance based on the original data it collected, and aim to achieve a higher 'payment' than the mechanism intended. Agents can in fact be even more strategic; they can only submit part of the data they collected, and use the remaining hidden data to refine their estimate. The mechanism should account for this strategic behavior - this explains why we had to design careful allocation spaces, corruption mechanisms, and minimax optimal estimators for the corrupted datasets.
2. *In Cai et al, the agents do not have the incentive to misreport data:* We would like to respectfully point out that this was not proved in their paper. As we pointed out above, they limit the strategy space to the amount of effort. We are looking at a 17-page COLT 2015 paper; please let us know if there is an updated version.
(We do however believe that it is possible to prove this result for a modified version of their mechanism in a simpler setting using techniques from our paper; more on this below)
3. *Cai et al are able to achieve optimal social welfare [penalty]:* First of all, we would like to point out that their definition of social penalty is different from ours.
Moreover, there is no way to reduce one to the other as we are summing over all agents' errors, and each agent could have a different error since they may use different estimators and may have different allocations.
* However, it is worth observing that when we were first studying the problem in Section 4.2 (a simpler strategy space than Section 2 and 3 where the agents have to accept a mean estimate from the mechanism), we started with exactly the same corruption method to the one in Cai et al. We were able to prove that it was truthful; however, it was able to achieve, at best, a $1.5\times$ factor of the global minimum according to our definition of the social penalty. We were able to improve the mechanism with a different corruption strategy to get $(1+\epsilon)\times$ factor for arbitrarily small $\epsilon$ while still ensuring truthful reporting. This is the mechanism presented in Section 4.2.
* To summarize, when restricted to mean estimation, our results in 4.2 are already stronger than Cai et al: it achieves a lower social penalty according to *our* definition, and we prove that agents should submit the data truthfully (they did not). We wish to emphasize that our main contributions in Section 2 and 3 are **significantly** more challenging than in 4.2.
4. *Cai et al achieve a dominant strategy equilibrium:* while it is possible to prove a DSE under a restrictive strategy space, it is easy to see that there may be multiple (infinite) Nash equilibria in any nontrivial mechanism if we allow agents to alter the data they submit and then estimate/post-process the information they receive. For instance, all agents can add some constant $a$ to each sample they collect, and then subtract $a$ from the final estimate. Each value of $a$ would correspond to a different Nash equilibrium.
Hopefully, these points alleviate your concerns that the results of Cai et al. "overshadow" this work. We will make sure to cite that paper and clarify the differences.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. But my concerns remain.
First of all, it can be proved that the agents do not have the incentive to misreport data in (Cai et al., 2015), if we adopt the same worst-case analysis used in your paper. This is because the worst-case expected payment (equation 3) is maximized when the agent truthfully reports in the setting of normal mean estimation. If you look at the decomposition of the expected payment (the first equation below equation 3) and look at the term inside the expectation. First, the expectation of the third term is zero when the data is i.i.d. Second, agent i does not have control over the second term. And finally, the worst-case expectation of the first term is minimized when truthfully reporting, because as you cited in the paper, the sample mean is the minimax optimal estimator.
Therefore, even if the model of Cai et al. (2015) does not explicitly allow data modification, their mechanism guarantees IC in your setting.
For the definition of social welfare, I am not convinced that your objective function is fundamentally different from that of (Cai et al., 2015). They are both expected error + costs. The social welfare definition used in (Cai et al., 2015) is the standard definition commonly employed in mechanism design literature. There seems to be little justification for straying from this convention.
---
Reply to Comment 1.1.1:
Comment: Thank you for your questions.
1. To begin with, we would like to reiterate that your concerns above relate to Section 4.2, which is only a small part of our contribution (1/2 page). Our primary contributions are in Sections 2 and 3, where agents can use the data/estimate they received the way they wish to, and are not restricted to accepting the estimate provided by the mechanism. As we highlighted in our previous reply, the mechanism design problem is significantly more challenging since the strategy space is much richer (also see point 4 below). In any case, below, we shall address the questions you have raised about Section 4.2.
2. *Definition of social welfare [penalty]:*
The two notions of social penalty become very different when you account for the fact that agents are not rewarded by payments, but by a model. To understand this, let us first look at the social penalty in Cai et al (using the notation in our mean estimation setting):
\\[
\sup_\mu E[(\hat\mu-\mu)^2] + \eta\sum_{i=1}^m c n_i,
\\]
where $\hat\mu$ is the estimate of the *mechanism*, $c n_i$ is the effort by each agent, and $\eta$ is a trade-off parameter in their setting. In contrast, the social penalty for us is
\\[
\sum_{i=1}^m\sup_\mu (E[(\hat\mu_i-\mu)^2] + c n_i).
\\]
where $\hat\mu_i$ is *the estimate assigned by the server to agent $i$* and $c n_i$ is the effort by each agent. This definition is justified in settings where agents are interested in the accuracy of their own estimates and not in any payments. The difference between the settings lies in the fact that when payments are involved, the payments do not affect the social penalty (as the mechanism's negative is the agent's positive). However, if you are rewarding the agent with a better/worse model, that affects the social penalty in our setting.
To illustrate this further, suppose one agent does not collect a sufficient amount of data. In Cai et al, the mechanism will still choose the best possible estimator $\hat\mu$ for itself to minimize the social penalty. It will pay only a small amount to the agent to penalize its lower effort, but this does not affect the penalty.
On the other hand, in our setting, the mechanism cannot simply assign this best possible estimate $\hat\mu$ as each agent's estimate $\hat\mu_i$, since it also needs to reward/penalize agents via the estimate $\hat\mu_i$.
So, our mechanism should offer a poor estimate $\hat\mu_i$ to this agent, which will increase the social penalty.
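To put toy numbers on the contrast above (all values below are hypothetical and only for illustration; they are not from the paper):

```python
# m = 2 agents, unit variance, per-sample cost c; agent 2 collects
# fewer samples than agent 1.  The sample mean of n samples has
# worst-case risk sigma^2 / n.
sigma2, c = 1.0, 0.01
n = {1: 50, 2: 10}
N = sum(n.values())

# Cai et al.-style penalty: one mechanism-side estimate built from all
# N samples, plus total collection effort.  Payments cancel between
# mechanism and agents, so they do not appear.
penalty_cai = sigma2 / N + c * N

# Our-style penalty: each agent is scored on the estimate *assigned to
# it*.  If the mechanism penalizes agent 2's low effort by assigning it
# a degraded estimate (here: one based only on its own 10 samples),
# that degradation enters the social penalty directly.
risk = {1: sigma2 / N, 2: sigma2 / n[2]}
penalty_ours = sum(risk[i] + c * n[i] for i in n)

assert penalty_ours > penalty_cai
```

The gap between the two penalties here is exactly the extra risk of the degraded estimate, which is the point of the illustration.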
* As we mentioned in our previous reply, when we studied a similar corruption strategy to the one used by Cai et al in our setting, the best we could obtain was a $1.5\times$ factor over the global minimum. We had to improve the mechanism further to obtain a $(1+\epsilon)\times$ factor.
* We do wish to clarify that we view the results in Cai et al as complementary to our results in Section 4.2 (in case the phrasing in our previous reply was confusing). We both study different settings and design arguably optimal mechanisms for our respective problems.
3. *IC proof intuition:* In our proof of Theorem 3, we do use a similar intuition to prove IC to the one that you mentioned above, although we have to deal with a more complex corruption strategy and the differences in the settings highlighted above. However, as we mentioned in our previous post, we do not think this proof is anywhere near as challenging as the proof of IC for our main result (Theorem 1) in Sections 2 and 3. See the proof sketch in Section 3.1. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies collaborative normal mean estimation, where strategic agents collect i.i.d. samples from a normal distribution at a cost. The paper designs a "truthful" mechanism for federated learning systems such that strategic players will try to collect data instead of doing some "random" thing that harms the system and benefits themselves.
Strengths: The problem it studies is very interesting and has potential impact on further directions and research. The theory is solid. I think this kind of research, which considers the robustness of the system / robust statistics, will gain much attention in the federated learning (and related) community.
Weaknesses: I am not very familiar with AGT, and thus will not point out weaknesses in the theory part. However, I do have some minor concerns with respect to the model and motivation.
+ In this paper, the authors only consider estimating the mean of a Gaussian distribution from samples, which is a one-round scenario. The clients can communicate with each other and send the raw data. In real applications, however, it is often undesirable to send the data directly because of privacy issues. Thus the optimization problem may include several rounds of interaction. Is it possible to extend the current single-round results to a multi-round protocol?
+ In federated learning or collaborative data-sharing applications, different clients may have different data distributions (in the current paper, different clients may have different means $\mu_i$). Thus, some clients may also untruthfully report the data to gain an advantage for their own sake. The current model may be too simple (all clients consider the same data distribution). Is it possible to extend the current results / mechanism to the non-IID setting where different clients may care about different distributions?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and ideas.
*Multi-round mechanisms and privacy:* We are actually looking at multi-round mechanisms now :) While we can build on our current paper, there is still more work needed to solve this problem. As for privacy, we believe the rigorous way to study this would be to include it as part of the agent's penalty (or utility) in the formulation. We think this is an interesting avenue for future work.
*Heterogeneous agents:* Studying heterogeneity will be interesting, both in terms of what the agent can collect and also in terms of what each agent wishes to estimate. We believe a 'complete' mechanism will use something similar to our mechanism when there is overlap between the agents, and some other protocol when there is no overlap. Our focus in this paper was to isolate and study the free-riding issue, which is quite challenging as is.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I think the paper in general is interesting and makes enough contribution (though I am not super familiar with the AGT backgrounds). I will keep my score and confidence. | Summary: The authors study a collaborative normal mean estimation problem, where $m$ strategic agents are trying to estimate the mean $\mu$ of an unknown normal distribution with given variance. The agents can acquire samples drawn from the distribution at a cost of $c$ per sample. In addition, each agent can share their samples with a mechanism, which will reward them by providing some of the samples other agents have submitted. Given that the agents are strategic, they can also omit or alter their samples prior to sharing, or even fabricate additional samples at no extra cost.
The strategy of each player therefore is how many samples to obtain, what to report and how to estimate $\mu$ given the response of the mechanism. The cost of the agent has a worst case flavour: it is the cost of the samples plus the maximum expected quadratic estimation error (with the supremum taken over all possible true values of $\mu$ and the expectation over the samples received directly or by the mechanism, which further depends on the strategies of other agents). The solution concept used is the Nash equilibrium. The mechanism needs to satisfy Incentive Compatibility (IC) and Individual Rationality (IR). IC is about the players following the 'honest' strategy, which in this case is suggested by the mechanism and includes taking a certain number of samples and honestly sharing all of them. If IC holds (more on that in the questions), then following this honest strategy is the best option for any agent, as long as every other agent is also honest. IR dictates that for the honest strategy, every player should have lower cost than if playing in isolation.
The mechanism also needs to be efficient, which means that the expected sum of costs (given the recommended IC strategy) should be a close approximation of the 'optimal' cost, which is the minimum that can be achieved by any mechanism and agent strategy (ignoring IC and IR constraints). In this case, it is shown that the optimal non-strategic mechanism simply collects samples and forwards them to all agents, who use a minimax estimator.
The designed mechanism, called C3D, collects samples from agents and then, for each agent $i$, splits all collected samples into two sets $Z_i$ and $Z_i'$, where $Z_i$ contains $\min(|Y_i|, \sigma / \sqrt{c m})$ samples, with $Y_i$ denoting the samples submitted by agent $i$. The samples $Z_i'$ are then perturbed by adding random noise, increasing in the difference between $Y_i$ and $Z_i$. This mechanism is both IC and IR (which is implied since submitting no samples is a valid strategy) and has an approximation ratio of 2.
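The split-and-corrupt step summarized above might look roughly as follows. This is only a sketch of the reviewer's description; the noise schedule, the integer threshold handling, and all parameter values are my guesses, not the paper's exact construction:

```python
import math
import random

def c3d_response(submissions, i, sigma, c, rng):
    """Rough sketch of the split-and-corrupt step for agent i.

    `submissions` maps agent id -> list of reported samples.  The noise
    schedule below is a placeholder, not the paper's construction.
    """
    m = len(submissions)
    others = [x for j, s in submissions.items() if j != i for x in s]
    threshold = sigma / math.sqrt(c * m)
    # Z_i: up to min(|Y_i|, threshold) samples returned as-is.
    k = min(len(submissions[i]), int(threshold))
    z_clean, z_rest = others[:k], others[k:]
    # Z_i': corrupted with noise growing in agent i's shortfall
    # relative to the threshold (hypothetical schedule).
    shortfall = max(0.0, threshold - len(submissions[i]))
    noise_sd = shortfall * sigma
    corrupted = [x + rng.gauss(0.0, noise_sd) for x in z_rest]
    return z_clean + corrupted

# An agent that met the threshold gets everything back uncorrupted.
rng = random.Random(0)
subs = {1: [0.1] * 20, 2: [0.2] * 20, 3: [0.3] * 20}
resp = c3d_response(subs, 1, sigma=1.0, c=0.01, rng=rng)
assert resp == [0.2] * 20 + [0.3] * 20
```

An agent submitting fewer than the threshold number of samples would instead receive mostly noisy data, which is the incentive at the heart of the mechanism.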
The authors also consider extensions where the agents have to submit their true samples or where the mechanism itself calculates $\mu$ (thus the agents cannot misreport and then ignore their input from their estimation). In the first case, a simpler mechanism is completely efficient, while in the second a $(1+\epsilon)$-approximate mechanism can be designed for all $\epsilon > 0$.
Strengths: The setting of collaborative mean estimation is very interesting and particular care has been taken to establish a nuanced model where this question can be meaningfully posed and answered. The presentation of the paper is excellent, with all ideas communicated clearly and in the right order. Before any rigorous proof, an appropriate amount of intuition is provided.
The mechanism itself is very natural and seems robust, while achieving a good approximation.
Weaknesses: No lower bound is provided for the 2-approximation.
Usually, Incentive Compatibility refers to a strategy being optimal no matter what the other players are doing, whereas here the definition given is essentially the same as the Nash equilibrium.
In any sensible mechanism IC would imply IR in this setting. Given that the paper is very well presented this is a minor point, but a bit of content could be cut and the notation simplified by restricting mechanisms wlog.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and suggestions. Yes, we agree it would be interesting to establish the lower bound for the 2-approximation, and leave it to future work.
*Definition of IC:* We agree with you on this. There are two common notions of IC that are used in the literature, dominant strategy (DSIC) and Bayes-Nash (BNIC). We don't have a DSIC, and while we have a Nash equilibrium, we are clearly non-Bayesian, so we couldn't use BNIC either. However, we agree that IC might be taken to mean DSIC, so we will switch to "Nash Incentive Compatibility" as a compromise. | null | null | null | null |
Towards a Unified Analysis of Kernel-based Methods Under Covariate Shift | Accept (poster) | Summary: This manuscript presents convergence rates for kernel methods under covariate shift. Results fit quite a general framework, including common classification and regression losses. Two approaches are analyzed: (i) a usual M-estimator and (ii) an importance-sampling-like M-estimator. It is shown theoretically and empirically that the latter outperform the former.
Strengths: The analysis presented in this paper provides interesting theoretical results regarding learning under covariate shift, which is a contemporary topic. The manuscript is well organized; it explains clearly the problem, state the results while discussing the hypotheses and, at the end, illustrates the theoretical findings by a numerical experiment.
I would like to stress that discussions regarding hypotheses are opportune and corollaries provide intelligible results.
The take-home message, stating that the importance-sampling-like estimator is better than the naive one, is interesting and confirms practitioners' intuition.
Weaknesses: Major remarks:
1) My main concern is about the novelty of the proofs: hypotheses (i) and (ii) look like straightforward tools to link expectations under the source distribution to the target distribution by linearity or the Cauchy-Schwarz inequality. I had a very quick glance at the supplementary material and it confirmed this guess (although I admit that I may be wrong). I think it is important, in order to assess the contribution of the paper, that the authors explain the original derivations appearing in the proofs, with respect to techniques used for obtaining similar results without covariate shift (unfortunately, I have no reference in mind).
2) Another (minor) point is that Figure 1 does not seem to verify neither hypothesis (i) nor (ii) since $\phi(x)$ seems to explode when $x \to \infty$. If it is the case, it would be better to find another example (or at least to discuss this point). If it is not the case, it would be informative to explain it.
Some suggestions of improvement:
1) $f^*$ is defined in Section 2.1, before the problem setting in Section 2.2. However, in practice, it corresponds to the optimal function under the target distribution, which is not clearly stated. I suggest making this clear.
2) Although an informed reader understands the definitions in Line 113, it is not totally clear that expectations are conditioned on the observed data. I suggest adding this information.
3) $D$ could be added after “Finite rank” in Table 1.
4) Line 264, it is not totally clear that "For the moment bounded case" corresponds to Figure 3. I suggest adding it.
Typographical remarks:
1) Extra “the” Line 5.
2) “that” instead of “that is” Lines 126, 131 and 132.
3) In Theorems 1-3, $\delta_n$ should satisfy an inequality that involves $\delta$ instead of $\delta_n$.
4) There are $\phi(\textrm x)$ instead of $\phi(\textrm x)^2$ Lines 212 and 223.
5) Full points are missing in captions.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1) What is $f_j$ Line 154?
2) What does $\psi_j \le Cj^{-2r}$ (Lines 201 and 235) mean, given that $\psi_j$ is a function? Is a norm missing?
3) Are the trends (evolution with respect to $n$) observed in Figure 2 (b)-(e) and Figure 3 (b)-(e) of the order $\left( \frac{\log n}{n} \right)^q$ or $\left( \frac{\log^2 n}{n} \right)^q$ as exhibited in Table 1?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive comments and valuable suggestions! Our point-by-point responses to your comments are given below.
**Major remark 1**
Thanks a lot for your concern with the novelty of our proofs. We admit that our proofs use many classical empirical process techniques, such as the concentration inequality (Lemma C.1.1) and the symmetrization technique (Eq. (10) in Section C of the supplementary material), which are also popularly used for establishing theoretical results without covariate shift [1,2]. Yet, we want to emphasize that there exist some significant differences in the proofs between the traditional regime and the case under covariate shift. Following your suggestion, we have highlighted the proof ideas and motivations under covariate shift and compared the differences with the case without covariate shift in the revised version. Some key points are summarized as follows:
Firstly, under the classical case without covariate shift, we only need to control the supremum over a functional class of the empirical process, which is defined as the empirical sum minus its expectation counterpart, and then some classical empirical process techniques, such as the symmetrization technique and Talagrand's concentration inequality, can be directly adopted. Yet, under the case with covariate shift, we need to control the difference between the empirical sum with the samples drawn from the source model and the expectation under the target model, as shown in Lemma C.1.2 of the Appendix. Thus, we first need to decompose the total error in the proof of Lemma C.1.2 into two parts: the first part $D_1$ is the classic empirical process formulation and the second part $D_2$ is the residual, which doesn't exist in the case without covariate shift. For $D_1$, the main difference is that we have an extra importance ratio term and we need some hypotheses (uniformly bounded or moment bounded) on the ratio to bound it using the Cauchy-Schwarz inequality and the Lipschitz-continuity assumption. For $D_2$, we use a technical truncation strategy to bound it. The truncation $\gamma_n$ plays a crucial role in balancing the empirical error and the approximation error. In fact, a fast diverging $\gamma_n$ may reduce the approximation error but compromise the empirical error. We use the optimal truncation $\gamma_n=\sqrt{n\beta^2}$, which ensures that the empirical error and approximation error converge at the same order up to a $\log$ factor. This proof strategy is significantly different from that under the classical case without covariate shift.
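Schematically, the decomposition in the point above can be written as follows. This is our own paraphrase with guessed notation (the exact form is in Lemma C.1.2): write $\Delta L_i = L(y_i, f(\mathbf{x}_i)) - L(y_i, f^*(\mathbf{x}_i))$ and let $\gamma_n$ be the truncation level; then
\\[
\frac{1}{n}\sum_{i=1}^{n}(\phi\wedge\gamma_n)(\mathbf{x}_i)\,\Delta L_i-\mathbb{E}_T[\Delta L]
=\underbrace{\frac{1}{n}\sum_{i=1}^{n}(\phi\wedge\gamma_n)(\mathbf{x}_i)\,\Delta L_i-\mathbb{E}_S\big[(\phi\wedge\gamma_n)\,\Delta L\big]}_{D_1:\ \text{empirical process under }P^S}
+\underbrace{\mathbb{E}_S\big[(\phi\wedge\gamma_n)\,\Delta L\big]-\mathbb{E}_S\big[\phi\,\Delta L\big]}_{D_2:\ \text{truncation bias}},
\\]
since $\mathbb{E}_S[\phi\,\Delta L]=\mathbb{E}_T[\Delta L]$ by the definition of the importance ratio; a larger $\gamma_n$ shrinks $D_2$ but inflates the fluctuations in $D_1$, which is the trade-off the truncation balances.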
Secondly, to establish the convergence rates for unweighted estimators in both the uniformly bounded and moment bounded cases, we can't trivially construct a constraint set associated with the $\mathcal{L}^2$-norm $\|f-f^*\|_T$ under the target distribution, as in the classical case without covariate shift. To tackle this problem, we notice the relation between $\|f-f^*\|_S$ and $\|f-f^*\|_T$ as shown in Eq. (10) in the Appendix, which motivates us to construct the constraint set $\Theta^\prime(\nu)$ associated with $\|f-f^*\|_S$ as in Lemma C.2.1. Also note that the coefficients $g_j$'s of the expansion for $g=f-f^*$ in the proof of Lemma C.2.1 correspond to the orthonormal eigenfunctions of $\mathcal{L}^2(\mathcal{X},P_x^T)$ instead of $\mathcal{L}^2(\mathcal{X},P_x^S)$.
Based on Lemma C.2.1, together with the definitions of $\mathcal{Q}(\delta), \mathcal{G}(\delta)$ and $\mathcal{P}(\delta)$ as illustrated in the proofs of Theorem 1 and 2, we successfully establish the convergence rates for unweighted estimators under covariate shift. This proof strategy is significantly different from that under the classical case without covariate shift.
**Major remark 2**
Thanks a lot for your suggestions. In the original version, Figure 1 serves as an illustration of the phenomenon of covariate shift, and indeed, it may not be a good example. It has been replaced by a new example, which satisfies hypothesis (ii), and more explanations have also been added in the revised version. The modified Figure 1 is also attached in the Author Rebuttal pdf.
**Suggestions of improvement**
Thank you for all your suggestions. 1. The reason for defining $f^*$ in Section 2.1 is to introduce the considered loss family. In the revised version, we have clearly stated that the true target function $f^*$ is defined under the target distribution in Section 2.2. 2. We have made it clear in the revised version by clarifying that the expectations are conditioned on the observed data. 3. 'Finite rank D' has been added in Table 1. 4. We will rewrite line 264 to "For the moment bounded case as demonstrated in (a) and (d) of Figure 3".
**Question 1 and 2**
Thanks a lot for pointing out the notation mistakes. Actually, $f_j$ should be replaced by $a_j$ where $a_j$ is defined in Line 153, $\psi_j$ should be replaced by $\mu_j$ where $\{\mu_j\}$'s are the eigenvalues of the kernel function defined in Line 151, and they have been corrected in the revised version.
**Question 3**
Thanks a lot for your comment. In the original version, although the Gaussian kernel is used in the simulation, we do not provide the theoretical result for the kernel class with exponential decay. In the revised version, we have added the explicit convergence rates corresponding to this kernel class in Table 1 which is also attached in the Author Rebuttal pdf. It is thus clear from the table that the trends in Figure 2 (b)-(e) and Figure 3 (b)-(e) almost agree with our theoretical findings where the Gaussian kernel with exponential decay is used. Additional numerical results presented in the supplementary material also confirm the theoretical convergence rates.
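As a back-of-the-envelope check (our own sketch, ignoring constants and the $\sqrt{\alpha}$ rescaling), the Gaussian-kernel rate follows from the fixed-point condition of the theorems:
\\[
C\sqrt{\log n}\,R(\delta_n)\asymp\delta_n^2,\qquad R(\delta)\approx C\sqrt{\tfrac{1}{n}\,d(\delta)\,\delta^2}
\;\Longrightarrow\;
\delta_n^2\asymp\frac{d(\delta_n)\log n}{n}.
\\]
With $d(\delta)\lesssim\log(1/\delta^2)$ for the Gaussian kernel, $d(\delta_n)\asymp\log n$, so $\delta_n^2\asymp\log^2 n/n$, i.e., rates of the $\left(\log^2 n/n\right)^q$ type asked about by the reviewer.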
**Reference**
[1] Yang, Y., Pilanci, M., & Wainwright, M. J. (2017). Randomized sketches for kernels: Fast and optimal nonparametric regression.
[2] Lian, H. (2022). Distributed learning of conditional quantiles in the reproducing kernel Hilbert space.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal, which I read carefully. The authors answered persuasively to my concern and provided a new introductory example, which satisfies Hypothesis (ii). I consequently agree to increase my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for your reply and increasing the score! We appreciate your time and effort in reviewing our work. | Summary: The paper provides a unified analysis of convergence properties for different kernel-based estimators under covariate shift. The analysis covers different loss functions and is focused on standard and importance weighted empirical risk estimators. The former are specified in Eq. (1) and the latter in Eq. (3).
The first assumption is pretty standard and requires a uniformly bounded kernel function. The second assumption enforces local strong convexity of the expected loss function relative to the source and target marginal distributions (source available during training, target assumed to be shifted and available at test time). The assumptions that characterize the distribution shift are given on page 4 (lines 131 and 132): i) in the first case the importance weights are $\alpha$-uniformly bounded, ii) in the second case the second moment of the importance weight function is bounded.
Theorem 1 gives convergence bounds relative to case i) under the assumptions above. Further assumption is made to give a more readable interpretation of the bound in Corollary 1 which ties the convergence rate to kernel spectrum decay.
Theorem 2 gives a similar convergence result in a more difficult case ii), again under the assumptions listed above.
Theorem 3 considers an estimator that uses an importance weighted empirical risk estimator, with truncated importance weights to avoid issues with tail samples. It is for case ii) and a bounded second moment of the importance weights. The latter result indicates a much tighter convergence rate than the one in Theorem 2, which considers the standard estimator without importance weighting.
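A minimal sketch of such a truncated importance-reweighted estimator for the squared loss follows. The kernel, bandwidth, truncation level, and synthetic data are all illustrative assumptions, not the paper's choices:

```python
import numpy as np

def rbf(A, B, h=1.0):
    # Gaussian (RBF) kernel matrix between row-sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

def tirw_krr(X, y, phi, lam, gamma):
    """Truncated importance-reweighted kernel ridge regression (sketch).

    Minimizes (1/n) * sum_i w_i (y_i - f(x_i))^2 + lam * ||f||_H^2 with
    truncated weights w_i = min(phi(x_i), gamma).  By the representer
    theorem, f = sum_j alpha_j k(., x_j) with
    alpha = (W K + n * lam * I)^{-1} W y.
    """
    n = len(y)
    w = np.minimum(phi, gamma)       # truncate heavy-tailed weights
    K = rbf(X, X)
    alpha = np.linalg.solve(w[:, None] * K + n * lam * np.eye(n), w * y)
    return lambda Xnew: rbf(Xnew, X) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))           # source covariates
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
phi = np.exp(0.5 * X[:, 0])                       # stand-in density ratio
f_hat = tirw_krr(X, y, phi, lam=1e-4, gamma=2.0)
```

Here `gamma` caps the largest weights, trading a small bias for much lower variance, which is the mechanism behind the tighter rate in Theorem 3.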
Empirical analysis illustrates the tightness of the bounds on synthetically generated learning tasks and a real-case study.
Strengths: While I have not checked the proofs, the theoretical part of the paper is its strongest point. It is also an interesting characterization of distribution shift carried into the bounds and would be interesting to see what other more granular specifications are possible for future studies. A relative comparison between Theorem 2 and 3 also illustrates the utility of truncated importance weighted estimator, which might be important for practical applications.
Weaknesses: The empirical study might be the weakest part of the paper, but given its nature this should be fine. It might also be interesting to see how relevant the assumptions on distribution shift are to practical applications and datasets.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The formulation of theorems should be cleaned as currently there are symbols that have not been introduced properly. For instance, it is unclear what $\delta_n$ refers to here and how it is related to $\delta$. The authors have spent a fair amount of space to illustrate the bounds and allow for readers to build some intuition. However, still the clarity could be a bit improved by moving from Appendix C.4 the part that transforms Eq. (5) to (7). At first, I had the impression that the right hand side just will not converge under the assumption on $c_0$ and $\lambda$.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your dedicated time reviewing our paper and valuable insights! Our point-by-point responses to your comments are given below.
**Weakness**
Thanks a lot for your valuable suggestions. We want to point out that due to the space limit, only a small fraction of numerical experiments are reported in the main text. Actually, a lot of numerical experiments on synthetic data and real data are presented in Section A of the supplementary material, where many other loss functions and multi-dimensional cases are considered. Moreover, we applied the proposed method to a wide range of real datasets and the numerical performance is summarized in Table 1 in Section A of the supplementary material. To some extent, this paper provides a comprehensive study of the numerical performance of the kernel-based methods under various scenarios with covariate shift.
We agree with you that it is very interesting and meaningful to test whether there exists a distribution shift in practical applications and datasets and whether the distribution shift satisfies the uniformly bounded or moment bounded assumptions. Unfortunately, to the best of our knowledge, relevant approaches remain lacking. We decide to leave such a promising topic as potential future work. It is interesting to note that, as shown in our real applications, the TIRW estimator always outperforms the unweighted estimator, and thus we suggest using the TIRW estimator to analyze real-life datasets.
**Question**
Thank you very much for your suggestions. In the revised version, we have rearranged the formulation of theorems and added some more descriptions of the introduced symbols and obtained results to provide more intuitive explanations of the established theoretical results. Some of the modifications are listed below:
Firstly, in Theorem 1, $\delta_n$ is defined as the smallest positive value $\delta$ satisfying $C\sqrt{\log n} R(\sqrt{\alpha}\delta)\le c_0\delta^2/2$. Similar modifications have been made in the other established results, including Theorems 2 and 3. Moreover, some useful discussions on $\delta_n$ have also been added right after Theorem 1.
Secondly, we have moved some detailed explanations from Appendix C.4 to clarify the transformation from
Eq.(5) to (7) in the main text. Specifically, the kernel complexity function $R(\delta)$ can be well approximated by $C\sqrt{(1/n)d(\delta)\delta^2}$ for the regular kernel class that includes many widely used kernels, such as the kernels with polynomial or exponential decay in their eigenvalues. Then, the link from $R(\delta)$ to $d(\delta)$ can be used to derive Eq. (7). Moreover, the explicit formulation of $\delta^2_n$ corresponding to specific kernels can be obtained by using the bound of $d(\delta)$. For example, $d(\delta)\lesssim \log(1/\delta^2)$ with $0<\delta<1$ for the Gaussian kernel and $d(\delta)\lesssim \delta^2$ for the finite rank kernel. Once $\delta^2_n$ is determined, the upper bounds can be immediately obtained by applying the theorems with the choice of $\lambda\asymp \delta_n^2$, and then we have $\|\widehat{f}-f^*\|^2_T\lesssim \delta^2_n$ from Eq.(5). | Summary: The authors study the covariate shift setting for nonparametric (kernel) methods (regularized empirical risk minimization with optional importance weighting), with an analysis that covers a wide array of losses and two conditions on the importance function. They establish sharp convergence rates which corroborate other rates in the literature. Additionally, they provide experiments showing these rates in practice.
Strengths: - Quality: The authors extend the results to a wide array of losses and two types of covariate shift problems which is nice.
- Clarity: The paper is well-written and the notation makes it easy to follow.
Weaknesses: - There are quite a few terms which are unknown in practice, and the importance function needs to be estimated, which limits the practical impact of the method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the authors clarify the discrepancy between the theorems and practice in terms of the importance function $\phi$? You mention that you use an estimated function as plugin, do you expect it should be possible to prove results when using a plugin instead of the real thing?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Same as in Questions section. No need for societal impact limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and helpful comments on our work! Our point-by-point responses to your comments are given below.
**Weakness**
Thanks a lot for your comments. We admit that the true importance ratio is unknown in practical applications and needs to be estimated from the unlabeled test data. However, we want to emphasize that many existing methods in the literature [1,2,3] can be directly applied to accurately estimate the importance ratio. As illustrated in our extensive numerical experiments, the difference in numerical performance between the implemented method with the true importance ratio and with the estimated importance ratio is negligible in both synthetic and real-life examples. This implies that the importance ratio can be efficiently estimated, which supports the practical implementation of the proposed method.
**Question**
Thank you very much for your comments. We want to clarify that our theoretical results are established with the known true importance ratio. Yet, as you mentioned in the previous comment, the importance ratio is unknown in practical applications. We admit that there exist some gaps between theory and practice, but we also want to emphasize that, as far as we know, such gaps commonly exist in the literature on covariate shift [4,5]. We agree that it is very interesting to establish similar theoretical results using the plugin estimator; to the best of our knowledge, such results are still unknown in the literature, possibly due to existing theoretical obstacles. We have decided to leave this promising topic as future work, but have added a detailed discussion of a possible route for establishing these theoretical results at the end of the supplemental file in the revised version. Specifically, the key step is that we need to control the term $\sup_{f \in \Theta(\delta)}|(1/n)\sum_{i=1}^n(\widehat{\phi}_n({\mathbf{x}}_i)-{\phi}_n({\mathbf{x}}_i))(L(y_i, f({\mathbf{x}}_i))-L(y_i, f^*({\mathbf{x}}_i))) |$, and thus a strong convergence rate for $\widehat{\phi}_n-{\phi}_n$ is required. We want to point out that the summands in this term are not independent, since the estimated importance ratio $\widehat{\phi}$ uses the source data, so some advanced technical tools are needed. Once this term is well bounded, we can establish results similar to Theorem 3 by using the existing proof technique with slight modification.
**Reference**
[1] Huang, J., Gretton, A., Borgwardt, K., Schölkopf, B., & Smola, A. (2006). Correcting sample selection bias by unlabeled data.
[2] Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., & Schölkopf, B. (2009). Covariate shift by kernel mean matching.
[3] Sugiyama, M., Nakajima, S., Kashima, H., Buenau, P., & Kawanabe, M. (2007b). Direct importance estimation with model selection and its application to covariate shift adaptation.
[4] Tibshirani, R. J., Foygel Barber, R., Candes, E., & Ramdas, A. (2019). Conformal prediction under covariate shift.
[5] Ma, C., Pathak, R., & Wainwright, M. J. (2023). Optimally tackling covariate shift in RKHS-based nonparametric regression.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I appreciate the response from the authors and especially the detailed comment on how to potentially derive bounds when using the empirical importance ratio. I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you for the feedback
Comment: Thank you for your feedback and all your comments! We appreciate your time and effort in reviewing our work. | Summary: This paper studies the generalization guarantees of nonparametric (kernel) methods in RKHS under covariate shift.
Compared to previous work (Ma et al., AOS 2023), the authors extend the results from the squared loss to general Lipschitz loss functions.
The derived results show that
- under the uniformly bounded case for the importance ratio, the unweighted estimator achieves the optimal learning rates in the $L^2(dP^T)$ space, where $P^T$ is the target distribution;
- under the bounded second moment case, the above estimator is sub-optimal;
- under a truncated ratio, a sharp learning rate can be achieved.
Strengths: - the generalization analysis under covariate shift is extended from the squared loss to general loss functions
- under the uniformly bounded case and the bounded second moment case for the importance ratio, the results recover those under the squared loss
- the results are also derived under the truncated case
Weaknesses: - Extension from the squared loss to general Lipschitz loss functions is based on Assumption 2. More discussion on this assumption is required for specific loss functions. If the space is limited, the discussion can be deferred to the appendix.
- There are several parts unclear in the proof. For example, in the proof of Lemma C.1.2, the notations $P_n$ and $P$ are undefined in Eq.(2), and more details are needed for the first inequality in Eq. (2).
- The proof idea and structure are almost the same as in (Ma et al. 2023). For example, there is no significant difference between the proof of Theorem 3 and that of Lemma 2 in (Ma et al. 2023). This is because, under Assumption 2 and Eq. (10), the results under Lipschitz loss functions can be well controlled.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why does $\phi_n(x_i) \leq \phi(x_i)$ hold in line 193?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review of our paper and the valuable feedback you provided. We have carefully considered your comments and have made significant efforts to address each of your concerns.
**weakness 1**
Thanks a lot for your valuable suggestion. Detailed discussions on Assumption 2 for specific loss functions have been added at the very beginning of the Appendix in the revised version. Precisely, recall that Assumption 2 is a local $c_0$-strong convexity condition on the expected loss function with respect to $\mathcal{L}^2(\mathcal{X},P^S_x)$ and $\mathcal{L}^2(\mathcal{X},P^T_x)$ at $f^*$, so verifying Assumption 2 is equivalent to verifying the local $c_0$-strong convexity of the loss function. Here are some examples. For the squared loss $L(y,f(\mathbf{x}))=(y-f(\mathbf{x}))^2$, note that for any $y \in \mathcal{R}$, the function $z \rightarrow (y-z)^2$ is strongly convex with parameter $c_0=1$, so $f \rightarrow L(y,f(\mathbf{x}))$ satisfies the condition in Assumption 2 with $c_0=1$. For the Huber loss, ${L}(y, f(\mathbf{x}))=\frac{1}{2}(y-f(\mathbf{x}))^2$ if $|y-f(\mathbf{x})| \leq \delta$ and ${L}(y, f(\mathbf{x}))=\delta|y-f(\mathbf{x})|-\frac{1}{2} \delta^2$ otherwise; since this loss function is locally equivalent to the squared loss, it is locally strongly convex under a mild tail condition on $y-f^*(\mathbf{x})$. For the check loss ${L}(y, f(\mathbf{x}))=(y-f(\mathbf{x}))\left(\tau-I_{\{y\leq f(\mathbf{x})\}}\right)$, local strong convexity holds if the conditional density of $y-f^*(\mathbf{x})$ given $\mathbf{x}$ is bounded away from $0$ uniformly [1]. For other loss functions, such as the logistic loss, hinge loss, and so on, more discussions and verifications are provided in the Appendix, and we also refer to pages 470-472 in [2] for the details.
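As a rough numerical illustration (not from the paper), the local strong-convexity condition behind Assumption 2 can be checked via the midpoint inequality $\tfrac{1}{2}(L(y,a)+L(y,b)) - L(y,\tfrac{a+b}{2}) \geq \tfrac{c}{8}(a-b)^2$; the losses below use the standard continuous Huber parameterization, and the constant in the assertion is illustrative (the exact $c_0$ depends on the convention):

```python
import numpy as np

rng = np.random.default_rng(0)

def squared(y, z):
    return (y - z) ** 2

def huber(y, z, delta=1.0):
    # standard Huber loss: quadratic near zero, linear in the tails
    r = np.abs(y - z)
    return np.where(r <= delta, 0.5 * r ** 2, delta * r - 0.5 * delta ** 2)

def midpoint_gap(loss, y, a, b):
    # gap = (L(y,a) + L(y,b))/2 - L(y, (a+b)/2); strong convexity with
    # parameter c requires gap >= (c/8) * (a - b)^2
    return 0.5 * (loss(y, a) + loss(y, b)) - loss(y, 0.5 * (a + b))

y, a, b = rng.normal(size=(3, 1000))
# squared loss: the gap equals (a-b)^2/4, so the (1/8)(a-b)^2 bound holds
assert np.all(midpoint_gap(squared, y, a, b) >= (1.0 / 8) * (a - b) ** 2 - 1e-12)
# Huber loss: convex everywhere (gap >= 0); strongly convex only locally
assert np.all(midpoint_gap(huber, y, a, b) >= -1e-12)
```

The check loss is piecewise linear, so its midpoint gap is zero pointwise; its local strong convexity holds only in expectation, under the density condition cited above.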
**Weakness 2**
Thanks a lot for your valuable comments on the notation and details of the proof. We have proofread the technical proof, and all the necessary definitions and details have been added in the revised version. Specifically, for the proof of Lemma C.1.2, we have provided the definitions $P\varphi:=E_{S}[\varphi(\mathbf{x},y)]$ and $P_n \varphi:=(1/n)\sum_i \varphi(\mathbf{x}_i,y_i)$ for a measurable function $\varphi(\mathbf{x},y)$, with $\varphi(\mathbf{x},y)=\phi_n(\mathbf{x})(L(y,f(\mathbf{x}))-L(y,f^*(\mathbf{x})))$ in Eq.(2), and clarified that the expectation $E[\cdot]$ is taken with respect to all random variables contained in it. Moreover, detailed explanations for the first inequality in Eq. (2) have also been added, and references have been provided, such as Proposition 4.11 in [2]. Precisely, it is derived from the symmetrization technique, which states that for any class of measurable functions $\mathcal{F}$, we have $E[\sup (P_n-P)\varphi]\le 2E[ \sup (1/n) \sum_i \sigma_i \varphi({\mathbf{x}}_i,y_i) ]$, where the $\sigma_i$'s are Rademacher variables. Additionally, it is necessary to stress that the coefficients $g_j$ of the expansion for $g=f-f^*$ in the proof of Lemma C.2.1 correspond to the orthonormal eigenfunctions of $\mathcal{L}^2(\mathcal{X},P_x^T)$ instead of $\mathcal{L}^2(\mathcal{X},P_x^S)$.
**Weakness 3**
Thanks a lot for your comments. We acknowledge that the work in [3] makes tremendous contributions to the theory for the squared loss function under covariate shift, and it is true that the Lipschitz property of the loss functions and Assumption 2 play a crucial role in establishing our theoretical results. Yet, we want to emphasize that significant theoretical gaps remain in extending the squared loss function to general loss functions.
It is well known that the KRR estimator with squared loss has an explicit solution. [3] takes advantage of this analytic solution to establish several critical results, including theoretical bounds similar to Theorems 1 and 2 in our paper. Yet, when a general loss function is considered, such an explicit solution does not exist, and different theoretical treatments are needed. Specifically, we turn to another proof strategy by noting the relation between $\|f-f^*\|_S$ and $\|f-f^*\|_T$ as shown in Eq. (10), which motivates us to establish Lemma C.2.1, where the constraint set $\Theta^\prime(\nu)$ is associated with the $\mathcal{L}^2$-norm $\|f-f^*\|_S$; this is significantly different from Lemma C.1.2 and from the traditional practice of using $\|f-f^*\|_T$. Based on Lemma C.2.1, together with the definitions of $\mathcal{Q}(\delta), \mathcal{G}(\delta)$ and $\mathcal{P}(\delta)$ as in the proofs of Theorems 1 and 2, we establish the convergence rates for unweighted estimators under covariate shift. It is worth pointing out that the convergence rate established in [3] can be regarded as a special case of our theoretical results, matching the upper bounds in Theorems 1 and 2. Additionally, we also establish sharp bounds in terms of excess risk under various cases, which provides a unified analysis of kernel-based methods under covariate shift.
We want to emphasize that, to some extent, this paper also provides a comprehensive study of the numerical performance of kernel-based methods under various covariate shift scenarios, which lends further support to the unified analysis and highlights the contributions of this paper.
**Question**
Thanks a lot for your comment. In fact, $\phi_n({\mathbf{x}}_i)$ is the truncated version of $\phi({\mathbf{x}}_i)$, that is, $\phi_n(\mathbf{x})=\min\left(\phi(\mathbf{x}),\gamma_n\right)$ as defined in Line 142 of the main text, and thus the inequality $\phi_n({\mathbf{x}}_i)\leq \phi({\mathbf{x}}_i)$ holds directly.
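For concreteness, the truncation $\phi_n(\mathbf{x})=\min(\phi(\mathbf{x}),\gamma_n)$ is a one-line clipping operation; the sketch below (with illustrative ratio values and $\gamma_n$) makes the inequality $\phi_n \leq \phi$ immediate:

```python
import numpy as np

def truncated_ratio(phi_values, gamma_n):
    # phi_n(x) = min(phi(x), gamma_n): clip large importance ratios at gamma_n
    return np.minimum(phi_values, gamma_n)

phi = np.array([0.5, 1.0, 3.0, 10.0])   # hypothetical importance ratios
phi_n = truncated_ratio(phi, gamma_n=2.0)
assert np.all(phi_n <= phi)             # phi_n(x_i) <= phi(x_i) always holds
```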
**Reference**
[1] Lian, H. (2022). Distributed learning of conditional quantiles in the reproducing kernel Hilbert space.
[2] Wainwright M J. High-dimensional statistics: A non-asymptotic viewpoint.
[3] Ma, C., Pathak, R., & Wainwright, M. J. (2023). Optimally tackling covariate shift in RKHS-based nonparametric regression.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. It addressed most of my concerns and I increase my score to 5.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for your feedback and increasing the score! We appreciate your time and effort in reviewing our work. | Rebuttal 1:
Rebuttal: Thank you sincerely for your insightful comments and for dedicating your valuable time and effort to the thorough evaluation of our paper. We have carefully considered all questions, concerns, and comments raised by the reviewers; their insights and suggestions have greatly contributed to enhancing the quality and clarity of our work. We provide detailed responses to each review separately. We have also incorporated our responses into the revised paper and supplementary materials, mainly covering the following aspects: highlighting the contributions of this paper in terms of theory and practical application; providing more details and insight into the established results; and correcting all typos, adding descriptions of the definitions, and clarifying the introduced symbols. The attached PDF contains a revised figure and table (Reviewer qhGh). Once again, we extend our heartfelt gratitude for your time, expertise, and contribution to our work.
Pdf: /pdf/7bdebfd66c79994558bf6028b25c179838d04921.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face | Accept (poster) | Summary: The paper studies an interesting and important question, i.e., how to automate LLMs to call existing models for solving specific tasks. The authors propose a novel framework that contains the following steps: (1) task planning, (2) model selection, (3) task execution, and (4) response generation. The experimental results support the claims well. By leveraging ChatGPT and the abundant AI models on Hugging Face, the proposed method is able to cover numerous sophisticated AI tasks in different modalities and domains and achieve impressive results in many tasks.
Strengths: + The paper is well-written and easy to follow.
+ A novel idea proposed and well-supported experiments conducted.
Weaknesses: - The method heavily relies on the existence of SOTA LLMs (e.g., ChatGPT), which may involve some practical issues (e.g., unaffordable API costs). It remains under-explored whether open-source LLMs (e.g., vicuna) can be leveraged for the framework (or how to adapt vicuna for better task planning).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For the model selection part, what if the number of models hosted on Huggingface becomes extremely large? How to ensure that the model selects the most appropriate one from them?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns:
***Q1: Whether open-source LLMs can be leveraged for the framework?***
Yes, open-source LLMs are also suitable for our framework. In our experiments (please see Tables 3, 4, 5), we also deploy open-source LLMs for evaluation. Currently, although open-source LLMs are still weaker than ChatGPT and GPT-4, we believe these gaps can be reduced with the development of open-source LLMs, for example, LLaMA-2 [1]. Besides, by collecting high-quality user instruction data, we can fine-tune these open-source LLMs to better utilize the AI models from ML communities for solving AI tasks. Some recent works [2] also indicate the viability of this strategy.
***Q2: What if the number of models hosted on Huggingface becomes extremely large? How to ensure that the model selects the most appropriate one from them?***
Thanks for your question. Generally, when the number of models hosted on Hugging Face becomes extremely large, it could affect the accuracy of model selection and bring additional costs due to longer contexts (e.g., model descriptions). To handle this problem, our framework currently uses a filtering strategy (please see Section 3.2) to guarantee the stability of the system: 1) for the current task, we first retain the models whose task type matches it; 2) we then keep the top-K candidate models based on their number of downloads; 3) finally, we let the LLM select the most suitable model from these candidates by considering the user request. We will describe this part more clearly in the final version and continue to investigate this problem in the future.
Besides, our strategy is just one possible solution, and we believe there should be better strategies for this problem, which can be studied in the future. For example: 1) we could construct a hierarchical graph of all models based on the model cards on Hugging Face to enable better selection; 2) by collecting many user requests that involve model (tool) use, we could optimize public LLMs (e.g., LLaMA-2) to make better model-selection decisions. We will continue to investigate this direction in search of better strategies.
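The first two filtering steps described above can be sketched as follows; the record fields (`task`, `downloads`, `name`) are illustrative stand-ins for Hugging Face model-card metadata, not the paper's actual implementation:

```python
def filter_candidates(models, task_type, k=20):
    # Step 1: retain models whose task type matches the current task.
    matched = [m for m in models if m["task"] == task_type]
    # Step 2: keep the top-K candidates by download count; the LLM then
    # selects the most suitable one from these (step 3, not shown here).
    return sorted(matched, key=lambda m: m["downloads"], reverse=True)[:k]

models = [
    {"name": "a", "task": "image-classification", "downloads": 100},
    {"name": "b", "task": "text-generation", "downloads": 900},
    {"name": "c", "task": "image-classification", "downloads": 500},
]
top = filter_candidates(models, "image-classification", k=2)
assert [m["name"] for m in top] == ["c", "a"]
```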
[1] LLaMA 2: Open Foundation and Fine-Tuned Chat Models
[2] ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. | Summary: This paper presents a pipeline to orchestrate many autonomous agents (mainly open-source models on Hugging Face). Together with these models, which can solve NLP, CV, audio, and video tasks, the resulting HuggingGPT can complete complicated multi-modal tasks that might be decomposed into a sequence of atomic tasks or a graph.
Strengths: - The philosophy is interesting and insightful. I like the idea very much.
- This seems to be a promising direction for solving multi-modal tasks using HuggingGPT.
Weaknesses: - The whole pipeline seems to provide a series of prompts to solve some combined tasks. The method is not scientific from a traditional point of view.
- The evaluation protocol seems not mature. For example, there is no evidence to check whether the evaluation makes sense or not.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - It seems the authors manually filter the top-K models based on the number of downloads. What is the exact number of models in each task? And how do we select the best model among the top-K models? Based on which criterion?
- It seems that different evaluation metrics are used in Tables 3, 4, and 5. Is there any explanation? Can we use some straightforward metrics like task success rate in Tables 4 and 5?
- We do not know the quality of the GPT-4-annotated data used during evaluation. Is it possible to invite experts to manually check some of it? Don't you think reporting results on human-annotated and GPT-4-annotated data separately would be a better idea?
- Will these evaluation examples and evaluation scripts be open-sourced?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - Evaluation is weak
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns:
***Q1: The method is not scientific from a traditional point of view.***
Thanks for your question. The success of our framework benefits from the advent of powerful LLMs (i.e., ChatGPT or GPT-4). Therefore, we adopt a series of prompts as a workflow to implement each step and establish the connection between each stage. By introducing these well-designed prompts, our framework exhibits powerful capabilities in autonomously analyzing user requests and selecting suitable models for solving tasks. This method also verifies the feasibility of using LLMs as the controller to manage AI models. With the advent of more powerful open-source LLMs (e.g., LLaMA-2), we will introduce more advanced learning strategies into our framework.
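The prompt-driven workflow described above can be summarized as four chained stages; the stub functions below are purely illustrative placeholders for the LLM calls and model executions, not the system's actual prompts or APIs:

```python
def task_planning(request):
    # Parse the user request into a list of (sub)tasks via an LLM prompt.
    return [{"task": "image-classification", "args": request}]

def model_selection(task):
    # Let the LLM pick a model from the top-K candidates for this task type.
    return {"model": "some-hf-model", "task": task["task"]}

def task_execution(task, model):
    # Run the selected model on the task inputs.
    return {"task": task["task"], "result": "label"}

def response_generation(request, results):
    # Summarize all intermediate results into a final answer via the LLM.
    return {"request": request, "results": results}

def hugginggpt(request):
    tasks = task_planning(request)
    results = [task_execution(t, model_selection(t)) for t in tasks]
    return response_generation(request, results)

out = hugginggpt("what is in this picture?")
```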
***Q2: What is the exact number of models in each task? How do we select the best model? Based on which criteria?***
Thanks for your question. Here, we select the top-K models based on the number of downloads (lines 167-168). Due to the limitation of prompt length, we set the number of candidate models for each task to 20 (i.e., the value of K). For each task, the original number of models before filtering varies (the average is nearly 60, but some tasks have only a few models). After that, we do not set any fixed criteria but let the LLM automatically select the best model from the top-K models by providing their model descriptions, which include downloads, domain, task type, etc. We believe the LLM can select the most suitable model after comprehensively considering this information. We will add these details to the final version.
***Q3: Explanations about different metrics in Table 3, 4, and 5? Can we use some straightforward metrics like task success rate in Table 4 and 5?***
Generally, we think any user request can be converted into a directed acyclic graph (DAG). However, directly evaluating a DAG is quite difficult. Therefore, based on complexity, we divide user requests into three types (i.e., single task, sequential task, and graph task). We first use recall, precision, and F1-score as uniform metrics. But for complex tasks (sequential or graph tasks), these metrics alone cannot reflect the planning capability with respect to task dependencies. Hence, we additionally use [edit distance](https://en.wikipedia.org/wiki/Edit_distance) to estimate the matching degree for sequential tasks. For graph tasks, since it is very difficult to measure the matching degree of graph data, we currently adopt GPT-4 as a subjective metric to measure performance.
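For reference, an edit-distance metric over predicted vs. ground-truth task sequences can be computed with the standard Levenshtein dynamic program; this is a generic sketch (with hypothetical task names), not the paper's evaluation script:

```python
def edit_distance(pred, gold):
    # Levenshtein distance between two task sequences, counting
    # insertions, deletions, and substitutions, each at cost 1.
    dp = list(range(len(gold) + 1))          # row 0: distance from empty pred
    for i, p in enumerate(pred, 1):
        prev, dp[0] = dp[0], i               # prev holds dp[j-1] of the old row
        for j, g in enumerate(gold, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete p
                                     dp[j - 1] + 1,    # insert g
                                     prev + (p != g))  # substitute p -> g
    return dp[-1]

# one insertion turns the predicted sequence into the ground truth
assert edit_distance(["detect", "caption"], ["detect", "ocr", "caption"]) == 1
```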
And thanks for your suggestions. We have conducted human evaluations to better measure the planning capability and final generation of LLMs in our framework. Here, we design passing rate, rationality and success rate as the subjective metrics to evaluate different stages:
- Passing rate: check whether LLMs can understand the instructions and execute the current stage.
- Rationality: check whether the generated task sequence / selected tools are rational compared with the user request.
- Success rate: check whether the final results satisfy the user request.
We invite three human experts to annotate the provided data based on our designed metrics, and then calculate the average value as the final score. The results of human evaluation are reported in Table 2. These results can further validate the effectiveness of our proposed framework. We will continue to design more reasonable metrics for evaluation and add these results into our final version.
| **LLM** | **Task Planning** | | **Model Selection** | | **Response Generation** |
|-|-|-|-|-|-|
| | **Passing Rate** ↑ | **Rationality** ↑ | **Passing Rate** ↑ | **Rationality** ↑ | **Success Rate**↑ |
| Alpaca-13b | 51.04 | 40.26 | - | - | 6.92 |
| Vicuna-13b | 79.41 | 66.15 | - | - | 15.64 |
| GPT-3.5 | 99.42 | 91.03 | 93.89 | 84.29 | 63.08 |
*Table 2. Human Evaluation. We design passing rate, rationality and success rate as the subjective metrics and invite human experts to evaluate them. (Same as Table 2 in Rebuttal PDF.)*
***Q4: Reporting results in human-annotated and GPT-4 annotated data separately.***
Thanks for your suggestion. Actually, we have separated results into human-annotated and GPT-4 annotated data. Table 3, 4, and 5 reported results on GPT-annotated data (in single, sequential, and graph tasks). We will make a more clear description of this part in the final version. Furthermore, we also invite human experts to conduct human evaluations for our experiments to further verify our results. The results are shown in Table 2 (please see Rebuttal PDF).
***Q5: Will these evaluation examples and evaluation scripts be open-sourced?***
Yes. Currently, we are inviting the labeling team to conduct human evaluation and help us to iteratively clean up the whole dataset. We will release all evaluation examples and the evaluation scripts in the final version. | Summary: The authors propose HuggingGPT , a collaborative system for solving AI tasks, which is composed of a large language model (LLM) and numerous expert models from ML communities. They provide methods for each of the four stages involved in HuggingGPT's workflow: task planning, model selection, task execution, and response generation.
Strengths: Impactful and well written paper
- simple strategy for handing resource dependencies for executing tasks
- compelling idea of chaining of expert models to provide a tool to decompose a task into sub-tasks and identify the appropriate expert models to solve these sub-tasks
- reasonable format for inputting task request along with examples from the user
- robust evaluation - human evaluation along with automated evaluation
Weaknesses: I don't see any weaknesses in the experiments, evaluation or novelty of this paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: No questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the authors list the limitations of their work as a separate section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your positive comments. We will continue to refine our paper and devote more effort to the subsequent works to facilitate the community to better understand and explore this new research direction. | Summary: This paper considers large language models (LLMs) like ChatGPT as a controller and presents a new framework called HuggingGPT, which connects various AI models in the existing ML community (i.e., HuggingFace). Specifically, HuggingGPT consists of four steps including task planning, model selection, task execution, and response generation. By leveraging the strong capability of LLMs and numerous AI models in different modalities, HuggingGPT can solve sophisticated AI tasks and achieve promising results.
Strengths: 1) The idea to connect LLMs with rapidly developing ML communities like HuggingFace is novel. It largely extends the applicability of LLMs to solve multi-modality AI tasks by fully utilizing the existing powerful models in HuggingFace.
2) Each step of HuggingGPT is well designed from Figure 2. The whole paradigm of HuggingGPT is neat and effective.
3) This paper is well written and easy to follow.
Weaknesses: 1. In Section 3.1, the demonstration examples may have an important impact on the parsing performance. The authors should provide more details such as the number of demonstration examples and the method to select these examples. The demonstration case provided in Table 1 is somewhat confusing for me because it only involves the tasks about images and texts. Can these demonstration examples benefit the parsing of tasks in other modalities like audio? I also wonder whether the demonstration examples for each user request are the same.
2. In Section 3.2, the authors propose a model selection strategy based on in-context alignments and the number of downloads. But in my view, the contribution of this module is questionable. Since the performance gap between different models for each task may be significantly large due to model scales (e.g., GPT-2 and LLaMA for text generation), it’s nearly impossible to select weaker models to dealing with the corresponding task. Thus, I’m curious about the performance if we directly use the best model for each task. There are also no empirical results to show the necessity of model selection.
3. The experimental result is somewhat weak in terms of the following points:
(1) The authors only conduct the empirical analysis on task planning. Other modules should be also tested individually.
(2) The authors only use automatic evaluation metrics to measure the model performance. However, even GPT-4 score may have potential biases in the evaluation of generated texts. It’s better to involve human evaluation to make experimental results stronger.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I have included my questions in the weaknesses part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns:
***Q1: More details about demonstration examples.***
The number of demonstration examples is set to 3 in our default settings. Here, we select demonstration examples that contain more tasks and more complex task dependencies while using fewer tokens in the prompt. The purpose of these demonstration examples is to help LLMs understand how to parse the task queue. Therefore, even though these examples only involve text and image tasks, our framework will still make correct predictions on audio tasks. Currently, the demonstration examples are the same for all user requests.
Furthermore, we also conducted experiments to investigate the effects of demonstration examples, in terms of both their variance and their number. Here, we vary the number of task types involved in the demonstrations to measure their variance. The results can be found in Figure 1 (please see Rebuttal PDF) and Table 1. We found that increasing either the variance or the number of demonstrations improves the planning capability of LLMs. In the future, we will continue to investigate more factors to explore their effects on our framework.
| **Demo Variance** | **LLM** | **Single Task** | | **Sequential Task** | | **Graph Task** | |
|:------|:----|:-----|:---------|:-----|:--------:|----|-----|
| | | **Acc** ↑ | **F1** ↑ | **ED (%)** ↓ | **F1** ↑ | **F1** ↑ |
| 2 | GPT-3.5 | 43.31 | 48.29 | 71.27 | 32.15 | 43.42 |
| 2 | GPT-4 | 65.59 | 67.08 | 47.17 | 55.13 | 53.96 |
| 6 | GPT-3.5 | 51.31 | 51.81 | 60.81 | 43.19 | 58.51 |
| 6 | GPT-4 | 66.83 | 68.14 | 42.20 | 58.18 | 64.34 |
| 10 | GPT-3.5 | 52.83 | 53.70 | 56.52 | 47.03 | 64.24 |
| 10 | GPT-4 | 67.52 | 71.05 | 39.32 | 60.80 | 66.90 |
*Table 1. Evaluation of task planning in terms of the diversity of demonstrations. (Same as Table 1 in Rebuttal PDF.)*
***Q2: Can we directly use the best model for each task? The necessity of model selection?***
Yes. The motivation of model selection is to answer which model is suitable for each task, which requires a selection strategy. Directly selecting the best model is one available strategy. However, "the best model" is difficult to define, since it depends on many factors, e.g., which datasets, metrics, and domains are used to measure it. Moreover, for domain-specific requests (as shown in Table 3 of the Rebuttal PDF), e.g., creating anime-style images or generating protein sequences, simply selecting the model with the best overall performance is insufficient.
Therefore, we aim to fully utilize the information provided in the model card (e.g., model structure, training data, applicable domain, performance, downloads, etc.) to support model selection more comprehensively. Besides, as mentioned in our comments (Q1) to all reviewers, keeping model selection as a separate stage also enables better efficiency, flexibility, and extensibility. In the future, we will continue to explore better strategies for model selection.
***Q3: Other modules should also be tested individually.***
Thank you for your suggestions. In the initial version, we mainly evaluated our method on task planning since this module largely determines the subsequent results. Following your suggestions, we have added more experiments to evaluate demonstrations (please see Q1), and human evaluation on model selection and response generation (please see Q4). We found that the capability of LLMs also affects the results of model selection, so optimizing model selection is likewise very important. In the future, we will conduct more experiments to investigate more components of our framework.
***Q4: Involve human evaluation.***
Thanks for your suggestions. We further collected 130 diverse requests and conducted human evaluations to better measure the planning capability and final generation of LLMs in our framework. Here, we design passing rate, rationality, and success rate to evaluate the different stages:
- Passing rate: whether the LLM can understand the instructions and execute the current stage.
- Rationality: whether the generated task sequence / selected tools are rational given the user request.
- Success rate: whether the final results satisfy the user request.
We invited three human experts to annotate the provided data based on our designed metrics, and then calculated the average value as the final score. The results of the human evaluation are reported in Table 2 (please see Rebuttal PDF). These results further validate the effectiveness of our proposed framework. We will continue to design more reasonable metrics for evaluation and add these results to our final version.
| **LLM** | **Task Planning** | | **Model Selection** | | **Response Generation** |
|-|-|-|-|-|-|
| | **Passing Rate** ↑ | **Rationality** ↑ | **Passing Rate** ↑ | **Rationality** ↑ | **Success Rate**↑ |
| Alpaca-13b | 51.04 | 40.26 | - | - | 6.92 |
| Vicuna-13b | 79.41 | 66.15 | - | - | 15.64 |
| GPT-3.5 | 99.42 | 91.03 | 93.89 | 84.29 | 63.08 |
*Table 2. Human Evaluation. We design passing rate, rationality and success rate as the subjective metrics and invite human experts to evaluate them. (Same as Table 2 in Rebuttal PDF.)*
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for your rebuttal. The additional experimental results in Q1 and Q2 are meaningful to me. As for Q4, I expect more details about the human evaluation to be added to the final version of the paper, such as the modality / task distribution of the 130 requests, the human annotation instructions, and the agreement among the 3 human experts.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response
Comment: Dear Reviewer,
Thank you for taking the time to review our rebuttal and for providing constructive feedback. We are pleased to hear that the additional experimental results in Q1 and Q2 have addressed your concerns. Based on your comments, we provide more details regarding the human evaluation and will add them to our final version. Please see as follows:
**Modality and Task Distribution**
To conduct a more comprehensive evaluation, we collected a dataset for human evaluation that covers as many modalities and tasks as possible. Below are the statistics on modality and task distribution.
The statistics for modality distribution are as follows:
|Modality| Count |
|-|-|
|Text | 46 |
|Text + Image |58|
|Text + Video |7|
|Text + Audio |1|
|Text + Image + Video |3|
|Text + Image + Audio |2|
|Text + Video + Audio |8|
|Text + Image + Audio + Video|5|
We also conducted a statistical analysis of the detailed task types in the task graph. The statistics are as follows:
|Task| Count |
|-|-|
|image-to-text|52 |
|text-to-image|38|
|visual-question-answering|30|
|text-to-video|23|
|conversational|20|
|text-to-speech|20|
|object-detection|13|
|text2text-generation |13|
|image-to-image|9|
|question-answering |6|
|text-generation|4|
|audio-to-audio|4|
|image-classification| 3|
|openpose-control |3|
|openpose-text-to-image| 3|
|canny-control|2|
|canny-text-to-image|2|
|automatic-speech-recognition|2|
|image-segmentation|1|
|seg-text-to-image |1|
|translation|1|
|audio-classification|1|
|token-classification|1|
|mlsd-control| 1|
|mlsd-text-to-image|1|
|summarization|1|
Based on the modality and task distribution, we observe that our collected dataset for human evaluation covers the text, image, audio, and video modalities as well as 27 tasks. These distributions suggest that our dataset for human evaluation generalizes well and can reflect the ability of HuggingGPT when processing real user requests.
**Human Annotation Instructions**
We invited three human experts to evaluate the outputs of the large language models, and formulated evaluation guidelines for the task planning, model selection, and response generation stages.
In the task planning and model selection stages, we asked the experts to evaluate the passing rate and rationality of the outputs of the large language models:
+ Task planning or model selection passes if and only if the task graph or the selected model is executable. We define a task graph as executable when three conditions hold: the output is parsable, the argument types of the tasks are correct, and the interconnected tasks match in terms of resource dependencies. For model selection to be considered executable, the output must be parsable and the chosen model must be valid.
+ The rationality of task planning or model selection is a stricter metric that requires validation of correctness beyond just execution passes. Task planning is considered rational if the planned task graph is executable and has the potential to address the user's request. Likewise, model selection is considered rational if the chosen model is the most suitable among the candidate models to meet the user intent and can effectively complete the planned task.
In the response generation stage, we asked human experts to evaluate whether the responses generated by HuggingGPT can eventually solve the user requests. Besides, if some media resources are generated based on user requests, we also require the experts to determine whether the generated media content satisfies the user requirements.
Moreover, to make a fair evaluation, we anonymized the names of the large language models to prevent any biases based on prior experience.
**Agreement among Human Experts**
Thank you for your suggestion. To measure the agreement among the 3 human experts in their evaluations, we designed three metrics:
+ **Percent Agreement**: This represents the percentage of items where all raters agree.
+ **Average Pairwise Agreement**: This metric computes the agreement between pairs of raters and averages them. For three annotators, we assess agreement for pairs (1, 2), (1, 3), and (2, 3) and average the results.
+ **Fleiss' Kappa**: This measures the reliability of agreement among raters for categorical ratings. It extends Cohen's Kappa to multiple raters. Values range from -1 to 1: closer to 1 means stronger agreement, 0 indicates random agreement, and negative values indicate less than chance agreement.
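For reference, the three metrics described above can be computed as in this short sketch (our illustration only; the function and variable names are not from the paper):

```python
import numpy as np
from itertools import combinations

def agreement_metrics(ratings):
    """ratings: (num_items, num_raters) array of categorical labels."""
    ratings = np.asarray(ratings)
    n_items, n_raters = ratings.shape
    # Percent agreement: fraction of items on which all raters agree
    percent = np.mean([len(set(row)) == 1 for row in ratings])
    # Average pairwise agreement over all rater pairs
    pairs = list(combinations(range(n_raters), 2))
    pairwise = np.mean([np.mean(ratings[:, a] == ratings[:, b]) for a, b in pairs])
    # Fleiss' kappa: observed vs. chance agreement for multiple raters
    cats = np.unique(ratings)
    counts = np.stack([(ratings == c).sum(axis=1) for c in cats], axis=1)
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                               # mean per-item agreement
    p_j = counts.sum(axis=0) / (n_items * n_raters)  # category proportions
    p_e = (p_j ** 2).sum()                           # chance agreement
    kappa = (p_bar - p_e) / (1 - p_e)
    return percent, pairwise, kappa
```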
|LLM|Percent Agreement|Average Pairwise Agreement| Fleiss' Kappa|
|:-|:-:|:-:|:-:|
|Alpaca-13b|82.31%|88.21% |0.78|
|Vicuna-13b|81.53% |87.69%|0.83|
|GPT-3.5|77.69%|85.13%|0.69|
We observe a high level of consistency in the human evaluations across the three models. Their evaluations align closely with each other, leading to moderate to excellent agreement metrics. This suggests that the evaluations are reliable. All of these details will be included in our final version. | Rebuttal 1:
Rebuttal: # To All Reviewers
We sincerely thank each reviewer for providing constructive comments on our paper, which are very helpful for improving it. Below are our responses to some general issues:
***Q1: Model Selection***
Thanks for the comments of each reviewer. Here, we will provide more details and descriptions to better explain the design of model selection and its correlation with task planning. We think it is necessary to keep the model selection as an individual step and give these reasons:
1. **Motivation**: First, task planning analyzes the user request and decomposes it into a task sequence, while model selection answers which model is suitable for each task. The target of each stage is therefore different, and keeping the two steps separate encourages designing dedicated strategies for each.
2. **Efficiency**: The number of models from ML communities can be massive. Since we need model descriptions to call the most suitable model, model selection consumes many prompt tokens. Combining task planning and model selection would therefore cost more tokens within one query, while separating the two steps is friendlier to LLM computation.
3. **Flexibility and Extensibility**: Furthermore, keeping task planning and model selection as two parts provides better flexibility and extensibility: 1) the model library is usually dynamic, since developers continually add new models to the community; keeping model selection independent of task planning makes the model library easy to update. 2) Some models involve finer granularity (e.g., specific domains). These settings necessitate the development of more flexible selection strategies.
We will add these descriptions to the final version to better explain the design of our framework.
***Q2: Experiments & Evaluation***
Thanks for the suggestions provided by each reviewer. We have added these results to better explain the settings in our framework.
+ Table 1 and Figure 1 in Rebuttal PDF show the effects of variance and number of demonstrations.
+ Table 2 in Rebuttal PDF demonstrates the evaluation of our framework by human experts.
+ Table 3 in Rebuttal PDF presents some cases illustrating the model selection.
Specifically, for human evaluation, we collected 130 diverse requests to measure the planning capability and final generation of LLMs in our framework. Here, we design passing rate, rationality, and success rate to evaluate the different stages:
- Passing rate: whether the LLM can understand the instructions and execute the current stage.
- Rationality: whether the generated task sequence / selected tools are rational given the user request.
- Success rate: whether the final results satisfy the user request.
We invite three human experts to annotate the provided data based on our designed metrics, and then calculate the average value as the final score.
Please refer to the attached Rebuttal PDF for the detailed results.
| **Demo Variance** | **LLM** | **Single Task** | | **Sequential Task** | | **Graph Task** |
|:-|:-|:-:|:-:|:-:|:-:|:-:|
| | | **Acc** ↑ | **F1** ↑ | **ED (%)** ↓ | **F1** ↑ | **F1** ↑ |
| 2 | GPT-3.5 | 43.31 | 48.29 | 71.27 | 32.15 | 43.42 |
| 2 | GPT-4 | 65.59 | 67.08 | 47.17 | 55.13 | 53.96 |
| 6 | GPT-3.5 | 51.31 | 51.81 | 60.81 | 43.19 | 58.51 |
| 6 | GPT-4 | 66.83 | 68.14 | 42.20 | 58.18 | 64.34 |
| 10 | GPT-3.5 | 52.83 | 53.70 | 56.52 | 47.03 | 64.24 |
| 10 | GPT-4 | 67.52 | 71.05 | 39.32 | 60.80 | 66.90 |
*Table 1. Evaluation of task planning in terms of the diversity of demonstrations. We denote demonstration variance as the number of different task types involved in the demonstrations. (Same as Table 1 in Rebuttal PDF.)*
| **LLM** | **Task Planning** | | **Model Selection** | | **Response Generation** |
|--------------|-------------------|--------------|---------------------|--------|---------------------------|
| | **Passing Rate** ↑ | **Rationality** ↑ | **Passing Rate** ↑ | **Rationality** ↑ | **Success Rate**↑ |
| Alpaca-13b | 51.04 | 40.26 | - | - | 6.92 |
| Vicuna-13b | 79.41 | 66.15 | - | - | 15.64 |
| GPT-3.5 | 99.42 | 91.03 | 93.89 | 84.29 | 63.08 |
*Table 2. Human Evaluation. We design passing rate, rationality and success rate as the subjective metrics and invite human experts to evaluate them. (Same as Table 2 in Rebuttal PDF.)*
Pdf: /pdf/f4017d03c30c58ee05bbd3403e9c98c86d0f28a3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a framework that uses an LLM as a controller over modularized, specialized task models to plan and execute complex tasks. The approach is to prompt the LLM to decompose a given task command into an execution DAG, and for each step, parse model specifications (metadata expressed in Hugging Face model cards) and select the corresponding modularized models for execution, and finally summarize a response to give to the user. The idea is in line with recent works on LLM-based planning and tool use.
Strengths: I think the idea is rather novel. It aligns with recent works on using LLM as a central component to query more specialized models to complete a complex task. This paper proposes to exploit the vastly available models hosted on Hugging Face in a combinatorial way. If things work out well, it could have a substantial application impact.
Weaknesses: The biggest selling point of this paper, as is repeatedly mentioned in the paper itself, is the planning. But compared to recent works, the planning strategy in this paper is actually rather simple. Existing works on planning often involve an iterative process where the LLM plans, executes, observes, and improves (for instance, Reflexion, AdaPlanner, Self-Refine, etc.). In this paper, it's just plan and execute. So, on the planning part, I do not see any contribution. Perhaps a bigger contribution of this paper than the planning is the task decomposition.
Experiments are very limited.
- Data scale is quite small (46 trace annotations).
- No planning baseline, no ablation study, and no insight about model interplay.
- Comparisons are all on different LLMs; this has little to do with the claimed contribution on planning.
A simple baseline can be directly using specialized models. I think even if in some cases the proposed HuggingGPT does not outperform them, it would still give readers a good picture of the pros and cons. An immediate ablation study I can think of is why not merge model selection and task planning into one step, or what is the impact of model selection, especially given the large number of models on Hugging Face. I see no clue in this paper.
It is not clear how the proposed approach relies on the few-shot demonstrations. Prior works on planning mostly rely on few-shot in-context prompting. It seems this paper is also on this technical line. But such dependency is also a limitation in the general use case. It is also not clear what the variance of the few-shot prompts used is, e.g., whether the few-shot examples are fixed, and what their diversity is.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please see my general comments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see my general comments.
--------------
**This section summarizes my concern based on rebuttal.**
There is a very simple baseline this paper needs to compare with: one-pass reasoning by the LLM without any implicit or explicit planning or model selection. The authors so far refuse to run such an ablation, therefore I am not convinced that the approach is effective. The proposed framework is indeed interesting, but an interesting thought remains a thought without proven effectiveness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns:
***Q1: The biggest selling point of this paper is planning but the planning strategy in this paper is actually rather simple.***
Thanks for your question. Below are our answers:
- First, we want to highlight that while planning is a crucial stage in our framework, the biggest contribution of our paper is to present how to utilize LLMs to coordinate multiple AI models from ML communities (e.g., Hugging Face) for solving AI tasks. And to the best of our knowledge, this is the first work to build connections between LLMs and AI models from ML communities.
- Second, existing works on planning (e.g., Reflexion, AdaPlanner, Self-Refine) are mainly designed for solving language tasks (e.g., reasoning, coding, etc.). But the planning in our framework uses LLMs to analyze user requests and then connect external models to solve AI tasks. To fulfill this target, we introduce task decomposition with well-designed prompts plus demonstrations to instruct LLMs to conduct task planning. Therefore, our planning strategy is carefully designed rather than simple, and its goal is quite different from that of existing planning works.
***Q2: Experiments are limited.***
Thanks for your question. Below are our answers:
- **Data scale is quite small**: Our dataset is composed of GPT-annotated (3,497 cases) and human-annotated (46 cases) data (please see Appendix A.2). As aforementioned, our work is the first attempt to connect LLMs with the ML community. As a result, there are no existing datasets for reference and we had to construct the dataset ourselves, which is very difficult. Until now, only a few works (e.g., Gorilla [1], ToolLLM [2]) have released datasets, but all of them are constructed by LLMs. We will try our best to build more high-quality data to further facilitate the investigation of this direction.
- **No Planning baseline and ablation study**: As aforementioned, the motivation of our paper is to present how to connect various AI models by using LLMs, not just for designing a planning strategy. And most of the existing planning works are for solving language tasks, while ours is to use LLMs to conduct task planning and thus invoke other models for solving tasks. We also give a discussion about planning in Appendix B. Besides, we have added more ablation studies (Please see Rebuttal PDF).
- **Comparisons are on different LLMs, not for planning**: The experiments on different LLMs are used to highlight the importance of planning in our framework, and planning could also be a critical metric to reflect the capability of LLMs. Besides, we also conduct a discussion (Appendix B) with other works (e.g., AutoGPT) in different dimensions, including planning.
Besides, we also add more experiments, ablation studies, and human evaluations about other modules (Please see Rebuttal PDF).
[1] Gorilla: large language model connected with massive apis
[2] ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs
***Q3: Why not merge the model selection and task planning into one step?***
Thanks for your question. We think it is necessary to keep the model selection as an individual step and give these reasons:
1. **Motivation**: First, task planning analyzes the user request and decomposes it into a task sequence, while model selection answers which model is suitable for each task. As a result, the target of each stage is different, and both play very important roles in our framework.
2. **Efficiency**: The number of models from ML communities can be massive. Since we need model descriptions to call the most suitable model, model selection consumes many prompt tokens. Combining task planning and model selection would therefore cost more tokens within one query, while separating the two steps is friendlier to LLM computation.
3. **Flexibility and Extensibility**: Furthermore, keeping task planning and model selection as two parts provides better flexibility and extensibility: 1) the model library is usually dynamic, since developers continually add new models to the community; keeping model selection independent of task planning makes the model library easy to update. 2) Some models involve finer granularity (e.g., specific domains). These settings necessitate the development of more flexible selection strategies.
Therefore, we prefer to keep model selection and task planning individually to help us to develop each stage. We will also add these descriptions to our final version to better explain the design of our framework.
***Q4: How does the proposed method rely on the few-shot demonstration? And the variance of the few-shot prompt used?***
Thanks for your question. We utilize few-shot demonstrations to instruct LLMs to understand user intent, task dependencies, and how to parse the task sequence. By introducing few-shot demonstrations, LLMs are able to produce better results by following our instructions. To further investigate the effect, we conduct experiments varying the number of demonstration examples (0-5). The results are reported in Figure 1 (please see Rebuttal PDF). We find that the planning capabilities of LLMs obtain significant improvements from few-shot demonstrations.
Besides, we also conduct experiments to investigate the variance of demonstration examples. Here, we vary the number of task types involved in the demonstrations to measure the variance. The results can be found in Table 1 (please see Rebuttal PDF). We observe that increasing the variance of demonstrations also improves the planning capability of LLMs.
---
Rebuttal Comment 1.1:
Title: Added materials are great bonus to the paper but still not to the point.
Comment: Thank you for the discussion and for making the experimental results more complete. I agree with the design considerations.
However, my biggest concern is still with the experiments, especially on planning. The paper and rebuttal material only offer verbal arguments for why planning could be useful. But I think the most important part is to show **whether it is indeed useful**. One very simple, easy-to-run experiment is comparing against solving some sequential/graph language-only tasks without any planning. Just throw these tasks at GPT-3.5 and compare the output with your model with planning.
Unless I entirely missed something on this ablation in the paper, I lean on rejection. The idea is good, but it lacks evidence to support it.
----
Some minor points: there is also a lack of references to prior works on task decomposition which has been studied extensively in the field. The application scenario is different, but the concept remains the same.
---
Reply to Comment 1.1.1:
Title: ( 1/2 ) Response to Comments
Comment: Thank you for taking the time to review our rebuttal. While we respectfully disagree with your opinions on our experimental part, we try our best to answer your question and address your concerns:
+ First, as mentioned in our previous comments, HuggingGPT is an integrated framework, which includes task planning, model selection, task execution, and response generation. Directly removing any stage in HuggingGPT would undermine its integrity and make it collapse. Benefiting from such a framework, HuggingGPT realizes the concept of connecting LLMs with AI models to autonomously address complex AI tasks across any modality and any domain. That is the biggest contribution. The experiments in our paper are used to highlight the importance of planning capability in such a framework, not to discuss how to formulate a better planning strategy. In our limitations (please see lines 301-304), we also mention that improving LLM capability in task planning needs to be explored in the future.
+ Secondly, the reviewer asks how to prove that planning is indeed useful. As aforementioned, HuggingGPT is an integrated framework, and removing any step would make it collapse. For example, task planning must not only parse the task sequence from user requests, but also guarantee that the parsed task sequence can be understood by the model selection stage. Therefore, the target of our paper is to verify whether HuggingGPT can succeed. All experiments in our paper indicate that our framework achieves promising results and that its design is **indeed useful** for connecting LLMs with AI models from the ML community. Please note that this is the first successful attempt to validate the feasibility of this problem. These conclusions show that each stage in HuggingGPT is critical to fulfilling the final target. We then conduct experiments to further reveal which stage in HuggingGPT is more important and can be improved, not to discuss which planning strategy is optimal. We expect the reviewer can understand this point. | null | null | null | null | null | null
What Can We Learn from Unlearnable Datasets? | Accept (poster) | Summary: This paper comprehensively evaluated existing unlearnable examples and showed a surprising result. Unlearnable examples aim to prevent the model from learning useful features from the data. However, the results show that for several methods, networks actually can learn useful features. This is revealed by applying an existing feature reweighting method. Such results indicate that "privacy" might not be preserved. This paper also demonstrated that the recent finding of linear separability in unlearnable examples is not a necessary condition, and that an orthogonal projection attack can break class-wise linearly separable unlearnable noise.
Strengths: - By using an existing method, Deep Feature Reweighting (DFR), the finding is very interesting and new in this field. This paper comprehensively evaluated the existing methods for generating unlearnable examples. It is surprising to see the model can "actually" learn useful features under several unlearnable methods; interestingly, error-minimizing noise does not allow this. Although the exact cause remains unknown, such findings provide valuable insights for future works. Additionally, this indicates that future works should consider DFR as a reasonable evaluation method for unlearnable examples.
- Based on recent work, Yu et al. [34] show that unlearnable examples rely on linearly separable noise. This paper extends the analysis to a wide range of generation methods. Results show that although unlearnable examples commonly induce linearly separable noise, one particular method, autoregressive perturbations, does not fall into this category. This indicates that linearly separable noise is not a necessary condition. These results suggest to future works that there could be more effective non-linearly-separable unlearnable noise.
- The presentation of this work is very good. It covers comprehensive related works, motivations, challenges, and limitations of unlearnable examples.
Weaknesses: My main concern is the Orthogonal Projection attack. Although it is technically sound, it is unclear what is the main contribution or the insight of this proposed method.
- Class-wise noise is known to be easily detected in unlearnable examples. Considering additive noise, Yu et al. [34] show that averaging across classes can easily expose the class-wise noise. One trivial solution one might consider is to subtract the noise for each class. Is the Orthogonal Projection attack more effective than this trivial solution? Also, does the Orthogonal Projection attack have any effect in the sample-wise setting?
- Compared to adversarial training (AT), the main benefit is efficiency. However, AT has other desired properties, such as adversarial robustness and learning robust features. As for effectiveness, there seems to be no significant benefit of the Orthogonal Projection attack. For OPS or OPS+EM, the constraint is a single pixel, while the AT evaluated in this paper is $L_\infty$; changing the AT constraint might be effective against the OPS-based method.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See the weaknesses section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review, for mentioning that our findings are “very interesting and new in this field,” and for writing that our paper provides “valuable insights for future works.”
> Class-wise noise is known to be easily detected…Yu et al. [34] show averaging across classes can easily expose the class-wise noise. One trivial solution that might consider is to subtract the noise for each class. Does an Orthogonal Projection attack more effective compared to this trivial solution?
We did perform experiments averaging images of a given class to recover class-wise perturbations, but simply subtracting this class image from each image doesn’t remove the poisoning effect. Additionally, because we do not know the true class at inference time, we cannot subtract the class image (and so there is also a distribution mismatch between train/test). This trivial method of subtracting average class images is compared to Orthogonal Projection below:
| Poison | Class-Avg Subtract | Orthogonal Projection |
|--------|--------------------|-----------|
| LSP | 13.05 | 87.99 |
| OPS | 12.62 | 87.94 |
Here, we report CIFAR-10 test accuracy for LSP and OPS poisons. We will be sure to add these results to the Appendix for completeness. Also, we revisited Yu et al. [34] but did not find their experiment where they average images of a class. Please let us know if there is a different work you were referring to, and we would be happy to comment.
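The trivial class-average subtraction baseline discussed above can be sketched as follows (our illustration only; the function name and array shapes are assumptions, and, as the table shows, this baseline fails to restore test accuracy):

```python
import numpy as np

def class_avg_subtract(images, labels, num_classes):
    """Estimate a class-wise additive perturbation as each class mean's
    deviation from the global mean, and subtract it from that class."""
    cleaned = images.astype(float).copy()
    global_mean = cleaned.mean(axis=0)
    for c in range(num_classes):
        mask = labels == c
        est_noise = cleaned[mask].mean(axis=0) - global_mean
        cleaned[mask] -= est_noise
    return cleaned
```

Note that this only removes the mean shift per class; and since the true class is unknown at inference time, the same correction cannot be applied to test images, causing a train/test distribution mismatch.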
> For Orthogonal Projection attacks, does it have any effect on the sample-wise setting?
Orthogonal Projection sometimes works on sample-wise perturbations (for example, NTGA and Unlearnable Examples in Table 3), but is not meant for this case since perturbations can vary greatly within a class. This can make it difficult to optimize a representative set of vectors (columns of W in Alg. 1 Line 3) which are predictive of the label.
> Compared to adversarial training (AT), the main benefit is efficiency…AT has other desired properties, such as adversarial robustness and learning robust features.
We agree there are desirable properties of adversarial training (AT). For learning from unlearnable data, however, the goal is only to improve clean test performance – to somehow learn from corrupted data and generalize to clean data. Aside from being ~5x faster than AT, Orthogonal Projection is also more interpretable: we can visualize the most predictive features (Figure 3 (b)) and choose whether to remove these features. On the other hand, it is difficult to understand which features AT is ignoring.
Additionally, the OPS poison is a case where AT will fail due to the unbounded perturbation, but Orthogonal Projection can achieve 87.94% test accuracy, higher than both the 14.41% of AT and the 85.16% of ISS [37].
[37] Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression, ICML 2023
> changing AT constraint might be effective against the OPS-based method
The authors of OPS [33] indeed conduct an experiment where they perform $L_2$ AT as opposed to $L_{\infty}$ and find that the poisoning effect can be mitigated: Using $\epsilon=2$ $L_2$ AT yields a model with 73.7% test accuracy. But this is at the cost of clean test accuracy which degrades from 94% to 73.45%. Orthogonal Projection is more effective as it does not degrade clean accuracy much (90% test accuracy on clean train data), and achieves high test accuracy (87.94%) on OPS poison.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: Thanks for the clarification. My concerns have been addressed. I very much appreciate this paper. | Summary: This paper suggests that DNNs can learn useful features from unlearnable datasets and provides a counterexample, demonstrating that linear separability of perturbations is not a necessary condition. They propose the Orthogonal Projection method to recover unlearnable datasets.
Strengths: 1. A new method called Orthogonal Projection is proposed to recover unlearnable datasets. On class-wise unlearnable datasets, this method achieves similar (or even better) performance compared to adversarial training. Moreover, it is much faster than adversarial training.
2. This work suggests the risk of using class-wise perturbations to craft unlearnable datasets.
Weaknesses: 1. For the claim "DNNs can learn useful features from unlearnable datasets", your experimental evidence is not sufficient and the logic is not very solid. See Question 1.
2. Though the intuition of Orthogonal Projection is clear, the purpose of its mathematical form is still ambiguous. In particular, it cannot explain why this method is better at handling class-wise perturbations than sample-wise ones. See Question 2.
3. Sections 4.2, 4.3 and 4.4 have weak connections with each other, appearing as if they are independent from one another. The results in sections 4.2 and 4.3 are not interesting enough.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. About weakness 1:
1.1 In Figure 2, the performance of DFR on different unlearnable datasets is not consistent. At the last epoch, DFR in (b, d) is even worse than the randomly initialized baseline. For certain datasets, such as Random Noise and AR, the peak in early epochs is a property of these datasets, not your DFR method.
1.2 I don't think the increase of test accuracy caused by DFR can support your claim because you retrain the linear classifier using clean data. Don't forget that your DFR method improves the accuracy of the randomly initialized baseline from $10\%$ to $35.79\%$. However, you cannot conclude that a randomly initialized feature extractor learns useful features.
2. About weakness 2:
2.1 In the algorithm of Orthogonal Projection, the recovered data $X_r$ is ensured to satisfy $W^T X_r=0$. What does this equation mean? How does it work for recovery?
2.2 What is the reason that Orthogonal Projection is not good at recovering sample-wise perturbations?
2.3 It is mentioned in Line 262 that perturbed images of sample-wise Unlearnable Examples and Adversarial Poisoning are not linearly separable though the perturbations themselves are linearly separable. Does this illustration contradict the claim in Line 216 "learning the most predictive features in an unlearnable dataset can amount to learning the perturbation itself"?
2.4 Please give experimental evidence of linear (in)separability of images perturbed by those class-wise and sample-wise attacks.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: As mentioned in the paper, orthogonal projection removes K dimensions from the data manifold, where K is the number of classes in a dataset. While this may not be a problem for high-resolution images with tens of thousands of dimensions, this detail could impact applicability for low-resolution datasets.
Also see the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough feedback and for recognizing that our work “suggests the risk of using class-wise perturbations.” That was one of our goals in this work.
> About weakness 1: 1.1 In Figure 2, the performance of DFR on different unlearnable datasets is not consistent. At the last epoch, DFR in (b, d) is even worse than randomly initialized baseline. For certain datasets, such as Random noise and AR, the peak in early epochs is a property of these datasets, not your DFR method.
This behavior is due to the different poison generation methods. More specifically, adversarial poisoning perturbs images with error-maximizing noise (and this noise has been shown to inject actual features models use during classification [36]). In this case, DFR reweights useful existing features for classification. On the other hand, Random Noise and AR poisons do not perturb images with image features; instead, both perturb with synthetic noise. In these cases, useful features are still learned during poison training, but only in the first epochs of training. In Figure 2, plots (a) and (d), we find that checkpoint features progressively get better despite the model's test accuracy remaining low. In plots (b) and (c), we find that checkpoint features are better around the test accuracy peak. In both cases, features learned during poison training can be reweighted for high test accuracy – calling into question the ability of any of these unlearnable dataset methods to safeguard data.
> 2.1…What does this equation mean? How does it work for recovery? 2.2 What is the reason that Orthogonal Projection is not good at recovering sample-wise perturbations?
Assume CIFAR-10 images of shape (3,32,32). Each column i of W (optimized in Alg. 1, Lines 1-4) is a 3072-dim vector that represents the most predictive image feature for class i. This step serves as recovery of the perturbation. After the QR decomposition of W, Q consists of orthonormal columns that form a basis for the column space of W. When we say Orthogonal Projection “ensures that the dot product of a row of X with every column of Q is zero,” (i.e., $X_r \cdot Q = 0$) this means that every recovered image vector does not contain any linearly separable component (i.e., does not contain any column of Q as a component). Alg. 1, Line 6 ensures image vectors and columns of Q are orthogonal and so the dot product is 0. The “recovered” data thus has 10 dimensions (approximations of the 10 perturbations) removed. We have now updated Sec. 4.1.1 to be more clear on these steps, and we will include these updates in our camera ready version.
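The projection step described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' implementation: `W` here is a random stand-in for the logistic-regression weights optimized in Alg. 1, Lines 1-4, and shapes assume flattened CIFAR-10 images.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 20, 3072, 10           # images (flattened 3x32x32), classes
X = rng.normal(size=(n, d))      # poisoned images, one per row
W = rng.normal(size=(d, k))      # stand-in for the learned per-class predictors

# Orthonormal basis Q for the column space of W (via QR decomposition).
Q, _ = np.linalg.qr(W)           # Q: (d, k), with Q.T @ Q = I

# Project each image onto the orthogonal complement of span(Q):
# remove the component lying in the (approximate) perturbation subspace.
X_r = X - X @ Q @ Q.T

# Every recovered image is now orthogonal to every column of Q,
# i.e., X_r @ Q = 0 as described in the text.
assert np.allclose(X_r @ Q, 0)
```

The last assertion is exactly the condition "the dot product of a row of X with every column of Q is zero"; the recovered data loses K dimensions (one per class), as noted in the Limitations.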
Orthogonal Projection sometimes works on sample-wise perturbations, but is not meant for this case since perturbations can vary greatly within a class. This can make it difficult to optimize a representative set of vectors (W) which are predictive of the label.
> 2.3 It is mentioned in Line 262 that perturbed images of sample-wise Unlearnable Examples and Adversarial Poisoning are not linearly separable though perturbation themselves are linearly separable. Does this illustration contradict to the claim in Line 216 "learning the most predictive features in an unlearnable dataset can amount to learning the perturbation itself"?
We don’t claim that learning the most predictive features will “amount to learning the perturbation itself”, only that it can amount to that. More specifically, the many poisons we consider in our work all behave differently, having been generated or optimized with a variety of objectives. Despite this, we find that for class-wise noises, a logistic regression classifier can learn the most predictive features and thereby isolate the perturbation itself (Figure 3).
> 2.4 Please give experimental evidence of linear (in)separability of images perturbed by those class-wise and sample-wise attacks
In the paper, we document the linear separability of **perturbations** from various poisons, as in [34]. Perturbed images, on the other hand, behave differently. Prompted by your feedback, we report logistic regression train accuracy on various CIFAR-10 poison images below, and will include these results in our camera ready version:
| Data | Train Accuracy |
|---------------------------|------------------|
| Clean | 53.88 |
| Unlearnable Examples | 100 |
| Adversarial Poisoning | 49.47 |
| AR (L2) | 37.99 |
| Robust Unlearnable | 67.21 |
| LSP | 100 |
| OPS+EM | 100 |
| (CW) OPS | 100 |
| (CW) Unlearnable Examples | 100 |
| (CW) Random Noise | 100 |
In the table above, we find that Unlearnable Examples, LSP, OPS, and class-wise (CW) poisons have linearly separable poison images, but the remaining poisons we consider do not.
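The separability probe used in the table (does a linear classifier reach 100% train accuracy?) can be illustrated with a classic binary perceptron, which reaches zero training errors if and only if the data are linearly separable. This is a simplified stand-in for the multi-class logistic regression used in the paper:

```python
import numpy as np

def linearly_separable(X, y, epochs=200):
    """Perceptron probe: returns True iff zero training errors are
    reached, which (given enough epochs) happens iff the binary
    dataset is linearly separable."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias feature
    w = np.zeros(Xb.shape[1])
    s = np.where(y == 1, 1.0, -1.0)             # labels in {-1, +1}
    for _ in range(epochs):
        errors = 0
        for xi, si in zip(Xb, s):
            if si * (xi @ w) <= 0:              # misclassified (or on boundary)
                w += si * xi
                errors += 1
        if errors == 0:
            return True
    return False

# Two well-separated clusters: separable.
separable = linearly_separable(np.array([[5.0, 0], [6, 1], [-5, 0], [-6, -1]]),
                               np.array([1, 1, 0, 0]))
# XOR: famously not linearly separable.
xor = linearly_separable(np.array([[0.0, 0], [0, 1], [1, 0], [1, 1]]),
                         np.array([0, 1, 1, 0]))
```

In the same spirit, 100% logistic-regression train accuracy in the table above indicates linearly separable poison images, while lower accuracy indicates the opposite.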
Thank you again for your thoughtful review. We made a significant effort to address your feedback including experiments and multiple paper edits, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Comment: I appreciate the efforts made by the authors to answer my questions.
Firstly, your clarification helped me understand why your proposed orthogonal projection method is effective, especially in defending against class-wise poisoning. Taking CIFAR-10 as an example, we can approximate that the 10 column vectors of Q represent 10 class poisons. Your method precisely makes the processed images orthogonal to that 10-dimensional subspace. I find this idea reasonable.
However, this also highlights the limitation of the method, which is the inability to guarantee the effectiveness for sample-wise poisoning, especially considering some recently proposed defense methods [a, b] that achieve better defense results. Perhaps focusing only on defense against class-wise poisoning would be helpful.
Secondly, I still believe that Sections 4.2 and 4.3 are not sufficiently interesting and have weak connections to the most important Section 4.4 in the overall article.
Therefore, I decided to keep the rating.
[a] Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression. ICML 2023
[b] Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks. arXiv 2023.
---
Reply to Comment 1.1.1:
Title: Clarifications on importance of findings in response to comment
Comment: > limitation of the method, which is the inability to guarantee the effectiveness for sample-wise poisoning, especially considering some recently proposed defense methods [a, b]
Given that class-wise poisons are still being developed in numerous works [10, 33, 34, 18], we believe research would benefit from our novel benchmark attack (Orthogonal Projection). We believe that without knowledge of our results, class-wise poisons could continue to be developed without a strong baseline to compare against. In fact, one of the reasons we developed the Orthogonal Projection method was because we could not find a good explanation for why papers opted for sample-wise poisons when class-wise poisons were more effective [10]. Our work demonstrates that removing a class-wise perturbation is not as simple as taking the average class image (see rebuttal pdf), and that perturbations can be recovered, as we illustrate in Figure 3.
While ISS [b] is effective, there is little theoretical evidence to suggest that this effectiveness is “guaranteed.” While the authors explain that “grayscale compression is used to eliminate low-frequency shortcuts, and JPEG compression is used to eliminate high-frequency shortcuts,” a DCT transform is never performed to analyze whether this is occurring or not. On the other hand, vector projections and orthogonal subspaces can be reasoned about for images theoretically, and tested empirically as we do in Section 4.4 and Appendix A.3. Compared to ISS [b], our Orthogonal Projection method produces higher test accuracy on the recently published class-wise OPS poison (87.94% (ours) vs 85.16% [b]).
[10] Unlearnable Examples: Making Personal Data Unexploitable, ICLR 2021
[18] Learnability lock: Authorized learnability control through adversarial invertible transformations, ICLR 2022
[...] remaining citations are from paper references
> Sections 4.2 and 4.3 are not sufficiently interesting
Section 4.2 presents a new way of analyzing learned representations of poisoned models. Deep feature reweighting allows us to probe poisoned models for useful features and evaluate unlearnable dataset methods in a new way. We find it interesting that some unlearnable datasets produce checkpoints that progressively improve test accuracy during poison training.
Section 4.3 presents a counterexample to the linear separability hypothesis [34]. Without knowledge of this result, one could be inclined to believe that data poisoning is extremely simple, relying only on linearly separable perturbations despite a wide variety of different optimization objectives and theory used to generate/optimize the poisons. We find it interesting that newer methods [26] can be more complex (not linearly separable) and still work as poisons. This result could inspire future research on non-linearly-separable perturbations, given that necessary conditions for unlearnable datasets remain an open problem.
> weak connections to the most important Section 4.4
We can see how the paper’s organization could suggest a deeper connection between our findings in 4.2 and 4.3 and the Orthogonal Projection method. But, as we state in the abstract and introduction, the goal of our paper is to “make a number of findings that call into question [Unlearnable Datasets’] ability to safeguard data.” Each section of the paper (4.2 to 4.4) is a finding that, we believe, can be viewed on its own and can inform the poisoning community about a fundamental issue of current unlearnable dataset methods.
For example, Unlearnable Datasets can be seen as not suitable to protect data because one can learn generalizable features from poisoned data (Section 4.2). On the other hand, Unlearnable Datasets can be seen as not suitable to protect data because class-wise perturbations are still commonly used and we can develop a method to find and remove them (Section 4.4).
---
Thank you again for taking the time to read and discuss our work with us. We appreciate your insights and have made additional paper edits following this response, in addition to the new table from your previous suggestion in this thread. Are there other specific suggestions that, if addressed, would potentially warrant a reconsideration of the current rating? Your guidance would greatly assist us in refining the paper's content. | Summary: This paper conducts an analysis of the properties of unlearnable dataset methods to evaluate their potential for future viability and security assurances. It is demonstrated that neural networks possess the ability to learn generalizable features from unlearnable datasets, while also suggesting that image privacy may not be effectively preserved. Additionally, a counterexample is provided to challenge the widely held belief that unlearnable datasets induce learning shortcuts through the linear separability of added perturbations. To address this issue, an orthogonal projection attack is proposed, which enables learning from various unlearnable datasets. The results of this approach demonstrate that linearly separable perturbations should not be relied upon.
Strengths: 1. The originality of the paper is good, as it gives us a different view of unlearnable examples and poisoning attacks by
(a). Demonstrating that neural networks possess the ability to learn generalizable features from unlearnable datasets
(b). Providing a counterexample that challenges the widely held belief that unlearnable datasets induce learning shortcuts through the linear separability of added perturbations
(c). Proposing a new evaluation framework and a novel attack method to assess the viability and security promises of unlearnable datasets.
Overall, the paper's findings challenge some widely held beliefs about unlearnable datasets and provide insights into their limitations and potential vulnerabilities.
2. The proposed method can achieve good results on some unlearnable tasks (class-wise) and is more effective than adversarial training.
3. This paper is well-written.
Weaknesses: 1. The motivation of the paper does not fully hold together for me. I would expect the proposed method to be motivated by the findings (Sections 4.2 and 4.3); however, the authors only illustrate the relationship between linearly separable perturbations and the proposed orthogonal projection method, making Section 4.2 seem superfluous. There should be more discussion on the relationship between the findings and the methods presented in Section 4.4.
2. The author uses the DFR method to prove that DNNs can learn useful features, but it is not convincing to me. The improvement in accuracy may simply come from using cleaner samples (e.g., Figure 2, plots (b), (c)). Also, I think the loss should also be examined in this part. There should be more references for why "the higher the test accuracy after DFR, the more likely the model will pick up on private image features present in the original clean data".
3. Although the authors give a counterexample to challenge the commonly held view that unlearnable datasets induce learning shortcuts via the linear separability of added perturbations, it lacks theoretical analysis.
4. The method only achieves satisfactory results on class-wise perturbations, and its generalization is insufficient. In contrast, adversarial training has better generalization ability.
5. Limited evaluation: Evaluation requires more datasets.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The correlation between the method proposed in Section 4.4 and the findings in Sections 4.3 and 4.2, particularly 4.2, is of interest.
2. Can additional sources be provided to corroborate the assertion that "the higher the test accuracy after DFR, the more likely it is that the model has learned private image features present in the original clean data"?
3. The efficacy of the proposed method on other datasets merits investigation. It is recommended that further experimentation be conducted.
4. The dissimilar trends displayed by (a) (d) and (b) (c) in Figure 2 require explanation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the authors adequately addressed the limitations.
However, to further strengthen the credibility and validity of their work, it is suggested that the authors provide a more detailed explanation of the methodologies and motivations employed in their research. Additionally, including more experiments and references would substantially bolster the robustness of their findings and conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful review, for mentioning that our work “gives us a different view to unlearnable examples,” and for recognizing that our results “challenge some widely held beliefs about unlearnable datasets.” Those were our goals in this work.
> The author uses the DFR method to prove that DNN can learn useful features, but it is not convincing to me. The improvement in accuracy may simply come from using cleaner samples
By finetuning a randomly initialized network (first row of Table 1) on a random subset of 5k clean samples, we can measure the effect of the clean samples. By doing the same procedure on poisoned networks and measuring the improvement **relative to** the finetuned randomly initialized network, we can measure the utility of poisoned-network features. Looking at Figure 2, plot (b), suppose we obtain checkpoints by training on class-wise random noise at epoch 3 and epoch 40. If we finetune the epoch 3 checkpoint (with 5k clean samples) we get test accuracy of 66.8%, while if we finetune the epoch 40 checkpoint (with the same 5k clean samples) we get test accuracy of 19.2%. Relative to a finetuned random-init checkpoint (35.9% test acc), epoch 3 features obtained by training on the perturbed unlearnable data yield more than a 30% increase in test accuracy. Useful features, which help test set generalization, are therefore present at epoch 3 but not at epoch 40. We found it surprising that despite both checkpoints being trained on unlearnable data, the learned features could still rise to such high performance.
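The relative-measurement logic can be mimicked on synthetic data. This toy is our own construction, not the paper's experiment: least squares on one-hot targets stands in for retraining the linear classifier, and two hand-built feature extractors stand in for checkpoints, one that preserves class information and one that has collapsed it.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 500, 20, 5
labels = rng.integers(0, k, size=n)
means = 10.0 * np.eye(k, d)                 # well-separated class means
X = means[labels] + rng.normal(size=(n, d))
X_test = means[labels] + rng.normal(size=(n, d))

def dfr(featurize, X_tr, y_tr, X_te, y_te):
    """DFR in miniature: freeze the feature extractor, refit only a
    linear head on clean data (least squares on one-hot targets),
    then evaluate on held-out clean data."""
    F = featurize(X_tr)
    W, *_ = np.linalg.lstsq(F, np.eye(k)[y_tr], rcond=None)
    return ((featurize(X_te) @ W).argmax(1) == y_te).mean()

good = lambda x: x                          # "checkpoint" whose features kept class info
P = rng.normal(size=(d, 1))
bad = lambda x: x @ P                       # "checkpoint" whose features collapsed class info

acc_good = dfr(good, X, labels, X_test, labels)
acc_bad = dfr(bad, X, labels, X_test, labels)
assert acc_good > acc_bad                   # useful features -> higher DFR accuracy
```

The gap `acc_good - acc_bad` plays the role of the 66.8% vs 19.2% comparison above: identical clean data for the head, so differences are attributable to the frozen features.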
> The method only achieves satisfactory results on Class-Wise perturbation, but the generalization of the method is not enough
Our Orthogonal Projection method is designed to exploit class-wise perturbations by design. It is not meant to be a general solution. Nearly all Unlearnable Dataset papers focus on optimizing sample-wise noise [10,7,26,35,27,34,33], as opposed to class-wise noise, with little explanation why. We designed Orthogonal Projection to answer, for ourselves, why class-wise perturbations should not be relied on. In [10], the authors note that despite class-wise error-minimizing noise being “superior to random noise” and “more efficiently and more flexibly in practical usage,” the noise “may get more easily exposed.” In [26], the authors state that “because class-wise perturbations can be recovered by taking the average image of a class, these should therefore be easy to remove” but do not provide evidence. If class-wise noises are more easily exposed, how does one remove them? We found that although one can average a class image, it is difficult to train a new network to reasonably high test accuracy by removing the average image. Instead, our solution is to project image features orthogonal to features optimized by logistic regression. Our Orthogonal Projection method is specifically for class-wise perturbations, which have recently been used to lock and protect data [18].
[10] Unlearnable Examples: Making Personal Data Unexploitable, ICLR 2021
[18] Learnability lock: Authorized learnability control through adversarial invertible transformations, ICLR 2022
[26] Autoregressive Perturbations for Data Poisoning, NeurIPS 2022
[...] remaining citations are from paper references
> "the higher the test accuracy after DFR, the more likely it is that the model has learned private image features present in the original clean data"
Unlearnable dataset perturbations are supposed to stop the model from learning useful features, and if the model gets high test accuracy by training on them, that demonstrates that it has learned useful features. Thank you for pointing out our ambiguous language. We have updated our draft to clarify and to remove the confusing “private image features” terminology. We will include this update in our camera ready version.
> The efficacy of the proposed method on other datasets merits investigation. It is recommended that further experimentation be conducted.
In the Supplementary Material, we include additional experiments on SVHN, CIFAR-100, and an ImageNet subset (Appendix 2.2) to enhance the results of Section 4.2. Moreover, we include additional experiments on CIFAR-100 in Appendix 3.2 to augment Section 4.4. We also provide additional results for different model architectures in Appendix 2.1 and Appendix 3.1.
>The dissimilar trends displayed by (a) (d) and (b) (c) in Figure 2 require explanation.
Thank you for pointing this out. We have added additional explanation to the text following Figure 2. To clarify: This behavior is due to the different poison generation methods. More specifically, adversarial poisoning perturbs images with error-maximizing noise (and this noise has been shown to be actual features models use during classification [36]). In this case, DFR is reweighting useful, existing features for classification, leading to higher test accuracy. On the other hand, Random Noise and AR poisons do not perturb images with useful features; instead, both perturb with synthetic noise. In these cases, useful features are still learned during poison training, but only in the first epochs of training. As training progresses, the model checkpoints are continually corrupted by synthetic noise features which cannot be useful for classification despite reweighting. By “useful,” we mean the technical definition in Eq. 1 [36].
[36] Adversarial Examples Are Not Bugs, They Are Features, NeurIPS 2019
Thank you again for your thoughtful review. We made a significant effort to address your feedback including multiple paper edits, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Comment: While your previous explanation helped address some of my questions, there are still a couple of major issues that need to be better clarified:
1) It would be helpful to see a stronger link between the orthogonal projection method proposed in Section 4.4 and the results from Sections 4.2 and 4.3, especially 4.2. The proposal seems like it should be motivated by those earlier findings, but right now the paper only shows the connection to linearly separable perturbations, making Section 4.2 seem unnecessary. Please illustrate more clearly how the observations in 4.2 and 4.3 led to the proposed technique.
2) For weakness 2, the evaluation should include more experiments and metrics beyond just test accuracy, such as loss over training. Relying solely on test accuracy doesn't fully support the claim. Including additional graphs would provide a more thorough analysis.
---
Reply to Comment 1.1.1:
Title: Clarifications and new metrics in response to comment
Comment: > Please illustrate more clearly how the observations in 4.2 and 4.3 led to the proposed technique
We can see how the paper’s organization could suggest a deeper connection between our findings in 4.2 and 4.3 and the Orthogonal Projection method. But, as we state in the abstract and introduction, the goal of our paper is “make a number of findings that call into question [Unlearnable Datasets’] ability to safeguard data.” Each section of the paper (4.2 to 4.4) is a finding that, we believe, can be viewed on its own and can inform the poisoning community about fundamental issues of current unlearnable dataset methods.
For example, Unlearnable Datasets can be seen as not suitable to protect data because one can learn generalizable features from poisoned data (Section 4.2). On the other hand, Unlearnable Datasets can be seen as not suitable to protect data because class-wise perturbations are still commonly used and we can develop a method to find and remove them (Section 4.4). In both cases, model parameters are optimized using unlearnable data – in Sec 4.2 the optimization yields generalizable features, while in Sec 4.4 the optimization yields the class-wise perturbation.
We have added additional clarifications to the introduction, and we would be open to suggestions on reorganizing the findings to minimize confusion.
> the evaluation should include more experiments and metrics beyond just test accuracy, such as loss over training
Following your advice, we have measured the train and test loss of each poison checkpoint after DFR and compared it to the loss of a randomly initialized model after DFR. As a reminder, both these losses are computed when evaluating poisoned checkpoints on clean train/test data. The following table performs the same experiment as in Table 1, Section 4.2, where we investigate what poisoned models learn. We use the same random subset of 5,000 clean CIFAR-10 train samples from the paper for DFR.
| Train Data | Min DFR Train Loss | Min DFR Test Loss |
|---------------------------------------|-----------------------|----------------------|
| None | 2.625 | 2.379 |
| Unlearnable Examples [10] | 1.777 | 1.798 |
| Adversarial Poisoning [7] | 0.966 | 1.036 |
| AR [26] | 1.408 | 1.531 |
| NTGA | 1.342 | 1.391 |
| Robust Unlearnable [27] | 1.789 | 1.790 |
| LSP [34] | 1.705 | 1.675 |
| OPS+EM [33] | 1.857 | 1.869 |
| (CW) OPS [33] | 1.669 | 1.697 |
| (CW) Unlearnable Examples [10] | 2.043 | 2.037 |
| (CW) Regions-4 [25] | 1.669 | 1.706 |
| (CW) Random Noise | 1.170 | 1.352 |
In the first row, we report the train and test loss of a randomly initialized RN-18 after DFR. In the subsequent rows, we report min train/test loss of poisoned checkpoints after DFR. We find that the trends when analyzing the loss follow the trends when analyzing test accuracy: Firstly, Adversarial Poisoning, AR, NTGA, and (CW) Random Noise achieve lowest losses – and accordingly those poisoned checkpoints also have the highest DFR test accuracy in Table 1. Secondly, and more interestingly, we find that **all** poisoned models have a lower loss than the finetuned (DFR) random initialized model. This reinforces our claim that models learned useful features from poisoned data. Interestingly, (CW) Unlearnable Examples come closest to the expected random chance loss of -log(1/10)=2.302. This suggests (CW) Unlearnable Examples are best at corrupting network weights from finetuning. Note that train loss is slightly lower than test loss in many cases because train loss is computed over the entire train set (which includes the subset of 5k samples we used for DFR).
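The random-chance reference loss cited above is just the cross-entropy of a uniform 10-way prediction:

```python
import math

# A model predicting the uniform distribution over 10 classes incurs
# cross-entropy loss -log(1/10) on every sample.
chance_loss = -math.log(1.0 / 10.0)
print(round(chance_loss, 3))  # 2.303
```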
Thanks again for your suggestions which have made our work more comprehensive. We have produced two additional plots (DFR train loss vs epoch and DFR test loss vs epoch), and a new table which we will include in our camera-ready version. We would appreciate it if you would consider revising your score in light of our response. And do let us know if you have any additional questions. | Summary: This paper studies the problem of the actual learnability of unlearnable datasets. Specifically, the authors have demonstrated that unlearnable datasets that are generated by existing methods can actually be used to learn generalizable features. In addition, the authors show that it is not necessary to make poisons linearly separable for achieving effective poisoning effects. Furthermore, based on the fact that most existing poisoning methods rely on linear separability, the authors propose a simple yet effective countermeasure to recover clean images for training.
Strengths: - It is valuable to revisit existing work in a popular field like data poisoning.
- The paper is very well written, with a thorough review of related work and sufficient descriptions of technical details.
- Experiments are extensive in terms of the number of models, datasets, settings, compared baselines, and so on.
- Several visualizations are provided to help support new findings.
Weaknesses: I very much appreciate the paper, and there are no major weaknesses but minor ones about toning down some claims:
1. This paper is not the first to find a counterexample to the linear separability hypothesis considering that previous work has intentionally relied on ideas beyond linear separability for generating unlearnable datasets [a].
2. The orthogonal projection method is not the only simple yet effective countermeasure considering [b], where the simple image compression-based method, ISS, is applied to remove poisoning perturbations. About the results, it seems that ISS is more globally effective than the orthogonal projection method. In addition, it would also be appreciated if diffusion models [3] and error-maximizing augmentation [20] could be compared. These two methods are also conceptually simple because they rely on either a pre-trained (diffusion) model for pre-processing or data augmentation (that may not necessarily be adversarial).
[a] Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning? ICLR 2023
[b] Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression. ICML 2023
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: N/A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and feedback. We appreciate your mentioning that our “experiments are extensive” and that the paper “is very well written.”
> previous work has intentionally relied on ideas beyond linear separability for generating unlearnable datasets [a]
Thank you for letting us know about the work of [a], which is relevant to our findings in Section 4.2. While the authors of [a] motivate their approach using separability of recent poisoning perturbations, they only use t-SNE visualizations, which is different from the separability experiment we perform.
More specifically, t-SNE cluster separability should not be equated to the linear separability we measure in Table 2 because it is possible to have linearly separable data that, when plotted using t-SNE, appears not separable. In other words, the poison from [a] could still contain linearly separable perturbations.
> ISS is more globally effective than the orthogonal projection method
Our Orthogonal Projection method is designed to exploit class-wise perturbations. It is not meant to be a general solution like ISS [b]. Nearly all Unlearnable Dataset papers focus on optimizing sample-wise noise [10,7,26,35,27,34,33], as opposed to class-wise noise, with little explanation of why. We designed Orthogonal Projection to answer, for ourselves, why class-wise perturbations should not be relied on. Compared to ISS [b], our Orthogonal Projection method produces higher test accuracy on the recently published class-wise OPS poison (87.94% (ours) vs 85.16% [b]).
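As a rough illustration of the kind of class-wise projection defense discussed here (a minimal synthetic sketch, not the paper's exact procedure: the per-class mean is a stand-in for the learned weights, and the data and perturbation directions are random):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a class-wise poison: each class receives one
# fixed additive perturbation direction.
dim, n_per_class, num_classes = 64, 50, 3
class_dirs = rng.normal(size=(num_classes, dim))
class_dirs /= np.linalg.norm(class_dirs, axis=1, keepdims=True)

clean = rng.normal(size=(num_classes * n_per_class, dim))
labels = np.repeat(np.arange(num_classes), n_per_class)
poisoned = clean + 5.0 * class_dirs[labels]

# Step 1 (stand-in): estimate one perturbation direction per class;
# here simply the per-class mean of the poisoned data.
est = np.stack([poisoned[labels == c].mean(axis=0) for c in range(num_classes)])

# Step 2: project every sample onto the orthogonal complement of the
# subspace spanned by the estimated directions.
Q, _ = np.linalg.qr(est.T)               # orthonormal basis, shape (dim, 3)
recovered = poisoned - poisoned @ Q @ Q.T

# Mean magnitude of each sample's component along its own class direction,
# before and after the projection.
rows = np.arange(len(labels))
before = np.abs((poisoned @ class_dirs.T)[rows, labels]).mean()
after = np.abs((recovered @ class_dirs.T)[rows, labels]).mean()
print(before, after)
```

The class-wise component shrinks sharply after the projection, which is the intuition behind why class-wise (but not sample-wise) perturbations are easy to strip out.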
[...] remaining citations are from paper references
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal!
Comment: My concerns are well addressed. The authors are encouraged to incorporate the above discussions into the final version. | Rebuttal 1:
Rebuttal: We'd like to thank everyone again for their reviews. A few reviewers mentioned wanting to see average images of a class to compare them to the learned weights from the first step of Orthogonal Projection. In the attached PDF, we include an additional figure which performs this visualization for two class-wise poisons. We will include this update in our camera ready version.
Pdf: /pdf/fade2179a1e8fa443b32b584dc2b17b5c0082d8f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Augmented Memory Replay-based Continual Learning Approaches for Network Intrusion Detection | Accept (poster) | Summary: The paper considers the application of deep learning (DL) for network intrusion detection systems (NIDS). The paper correctly points out that, in real contexts, NIDS must be continuously updated with new data-points to mitigate the impact of "concept drift". A way to do so is by employing "continual learning" (CL) methods---which, unfortunately, are affected by many issues that the paper seeks to address.
Specifically, the main contribution is a novel approach to deal with (i) class-imbalance problems leading to "catastrophic forgetting" (CF); and (ii) the computational overhead deriving from ways to mitigate CF. These contributions, whose intuitions are based on empirical evidence, are experimentally assessed on many datasets. The results show a good improvement over the baselines.
Strengths: + Relevant problem
+ Large evaluation
+ Good results
+ (somewhat) theoretically grounded
+ Multiple repetitions
+ Good quality of writing
+ The supplementary material is rich with details
The paper addresses a relevant problem within the machine learning (ML) domain, which has received only limited attention in the specific context of network intrusion detection (NID). The proposed methods are theoretically grounded (especially Section 3.1), and the findings are derived from a large set of experiments carried out on various datasets (which are not limited to NID data) and considering many baselines. The conclusions are drawn by repeating the experiments 5 times, increasing the overall soundness of the results---showing substantial improvements over the baselines.
Weaknesses:
## High Level
- Inappropriate datasets
- (somewhat) confusing theoretical arguments
- Bold assumptions
- (risk of) data snooping
- (some) missing details
## Low Level (and suggestions)
Below is an extended description of the abovementioned weaknesses. I will also provide some actionable means to rectify such issues, as well as abundant references that can be used to improve this paper and/or support some of my critiques.
### Inappropriate datasets
This is by far the biggest concern I have with the paper: the data used as basis for the experiments is inappropriate to test the hypothesis and provide a _convincing_ answer. Let me explain.
First, the paper deals with the problem of NID. As such, _any finding that derives from data that does not pertain to NID is redundant_. This automatically makes all the evaluations carried out, e.g., on CIFAR, SVHN, and CLEAR of zero value to the NID community (do note that a recent paper [D] highlighted that many papers on security problems carry out evaluations on data that does not reflect a security context, leading to skepticism by practitioners). I acknowledge that these datasets were added due to some limitations of (some) existing datasets for NID, but this is not an acceptable reason: if the authors were better aware of the NID context, they would know that there are (publicly available) datasets that allow a better representation of a real CL scenario. (more on this below).
Second, among the chosen NID datasets, there are the NSL-KDD and KDD Cup'99. These datasets are well-known to be flawed [C]; furthermore, they are also almost 25 years old, and the security community does not find them to be of any interest from a practical viewpoint [B, I]. Hence, even these experiments do not provide a convincing argument in favor of the paper's conclusions.
Third, the NID datasets also include the CIC-IDS17 and its enhanced variant, the CIC-IDS18. Unfortunately, also these two datasets are flawed [G, H]. Note that [G] came out in 2021, and it has already been well-received by the NID community (e.g., [A, F]), so it is concerning that this paper (which has been submitted to a top-venue such as NeurIPS) performs the experiments on the "flawed" variant of these two datasets---especially given that a "fixed" version exists (provided both in [G] and [H]). In short, these experiments are questionable; plus, what is even more questionable is that CIC-IDS17 and CIC-IDS18 are used to derive some observations that motivate the theoretical design of the proposed method.
Fourth, overlooking the previous two points, all the NID datasets span over a very short period of time, and do not enable any assessment that can be used to test "time-aware" applications of machine learning-based NIDS [F] -- or, at least, do so in a way that is appropriate for NeurIPS. The only exception could, potentially, be ANONIDS: however, such a dataset contains data-points from different networks, which raises many concerns [S, O]. Hence, even this dataset has dubious utility for the problem at hand.
The way to circumvent all these issues is not to use "image datasets", but rather to use NID datasets captured (i) over a long period of time and (ii) in the same network: _such datasets **exist**_, and a prominent example is the MCP dataset, which is built upon the well-known CTU13 dataset (see [J,K]).
To summarize: all the experiments carried out in the paper are performed on datasets that are inappropriate to test the underlying hypothesis without there being a sensible reason to do so.
### Bold Assumptions
This is a "pragmatic" weakness, which does not invalidate the paper, but significantly limits its real-world relevance.
Put simply, the paper proposes a method that is rooted in the application of "deep learning" (DL) for NID. The problem, however, is that real developers of ML-NIDS are very skeptical of DL in NID [L]. The reasons are many, but at the basis of this is that such models are hardly explainable [M], and also that they are outperformed by "traditional" machine learning approaches (assuming that they can be applied). For example, [E] shows that multi-layer perceptrons are inferior to a random forest from a detection-performance perspective. Moreover, [B] reveals that fine-tuning a DL method requires months of data. Finally, the recent [F] shows that the training time of "shallow" methods (such as decision trees) is very short. Note that [F] uses similar datasets as the ones used in this paper: to give some context, training a binary classifier using a decision tree on UNSW-NB15 requires less than 5s, whereas the MLP used in this paper requires 350s (looking at Table 3) -- and the experiments done in this paper entail much better hardware.
In light of this, it is questionable whether the proposed method has any relevance in reality. Yes, perhaps it provides some advantages -- but if the price of such advantages is an unacceptably high training time (which the method itself seeks to reduce) then the overall contribution of this paper to the state-of-the-art is low.
To address this issue, I invite the authors to include also "shallow" methods that do not entail (deep) neural networks, and show that the proposed method allows the resulting architectures to achieve a comparable degree of performance of "shallow" methods.
Notwithstanding, there is another "bold assumption" made in the paper: the fact that the "malicious classes" are going to remain stable. Indeed (to the best of my understanding), the paper seeks to analyze the effect of distribution shift of the "benign" samples (this is explicitly stated in the introduction), and the set of malicious classes is always known beforehand. This is quite unrealistic (and it had been known since [O]), and may further decrease the value of the proposed method in practice: what would happen in case a new "unknown" attack appears, and is then assigned a "new" label?
On this note, the other "practical" limitation is that the paper assumes that there will be an influx of (correctly labelled) samples that are fed into the system. How is this done in practice? Abundant work (see [N] for a summary) pointed out that doing this is expensive, and especially so in NID context. As a matter of fact, what is not shown in the paper is a clear "use-case" that depicts how the system is designed to be deployed in practice: without such a schematic, it is difficult to determine if the proposed method is even "conceivable" to be deployed into real systems (as also highlighted in [F], practitioners are very interested in the "system infrastructure" envisioned in research papers). Note that I did look in the supplementary material, but the "schematic" provided in Figure 3 does not allow to answer my doubts: the "continual learning module" appears to be a "black-box" that receives its inputs from a "training dataset". However, how is such training data collected (given that we are in an "continual learning" setting)? The way I see it, it requires an enormous amount of manual effort.
### (somewhat) confusing technical arguments
While I had no issues in understanding the rationale of Section 3.1 (aside from the following unclear sentence ```In our approach, we choose this parameter based on global information.```), I found the description of Section 3.2 to be lacking in terms of clarity and soundness. Setting aside that the experiments are based on "inappropriate" datasets, I did not find compelling evidence of ```virtual SGD parameter update is a slowly varying process.``` -- and this may be due to Figure 3 not being introduced in any way (e.g., what do the x and y axes report? What is the line? The y-axis is also not uniform).
I also did not find the "motivation" (mentioned in the Introduction) that leads up to Section 3.2 to be clear. To my understanding, the problem is that the implementation of memory-replay techniques is expensive from a computational perspective---but is this really the case for NID? As I mentioned above, "shallow" methods are much faster to train.
Also, at the end of the Introduction it is stated that ```[PAPA] lead[s] to improved scalability.```. What does this mean? The key term is "scalability". I do not see any evidence in Section 3.2 that makes me believe that PAPA leads to better scalability. I invite reading [P, Q].
### Potential Data Snooping
I have reason to believe that the methodology followed in the paper is affected by the "data snooping" problem [15]. In other words: the proposed method is designed in a way that would not be typically "known" in advance.
This is epitomized by Section 3.2, wherein some conclusions are drawn by analyzing CIC-IDS18 (and CIFAR), and then used to develop the proposed methodology. At the same time, this is also likely to be present in the experimental setup:
> The intuition behind the memory size is to store nearly 1% of the total training sample of the benchmark dataset in the buffer memory, and 75% of the buffer memory samples will be used for replay.
The point is that a real developer has no clue about the size of the "training dataset": in real settings, an organization will collect some training data, and then use it to train a model; however, they would not know "how much data will appear in the future". It is possible to address this latter issue by assuming a "fixed amount" (e.g., 1000 samples) which does not depend on the size of a given benchmark (this is what is done, e.g., in [N]).
### Some additional issues:
* In the caption of Figure 1, it is not stated what "m" refers to
* Bad formatting on Page 6 (the text overlaps with Algorithm 2)
* Also about formatting: tables and figures (and their captions) are almost impossible to discern from the text itself. I think the authors played too much with margins.
* Given the importance of hardware in determining the runtime [F], the main paper should include also details on the experimental platform.
* There is an excessive usage of acronyms in the paper. For instance, SBCP occurs only 3 times (and is strikingly similar to CBRS). Also, MR and MIR are very similar in how they appear, but they denote different concepts. HIDS is also redundant.
* Reference [14] is incomplete
* In Section 3.1, I had a hard time distinguishing between terms such as "maximal", "majority", "full", "largest."
* The "Limitations" are only mentioned in the supplementary material
* The features used to train the models are not mentioned (I found such a lack to be surprising given that the paper cites [15]). Some network features have been demonstrated to be redundant for the sake of NID classification (see [G, H, S]), and not mentioning them casts doubts on the experiments done in the paper. Note that I did not find any mentioning of these also in the supplementary material
* I was surprised of not finding [A] among the cited works, and especially among the "comparison" methods, given that it considers a very similar problem (and is also evaluated on the --fixed-- CIC-IDS17)
* Please report the size of the training / test data. I couldn't find a clear mention of this in the experimental section (note that this is a well-known problem in related literature [N]).
* The "bread and butter" of CL is that the performance should be measured "over-time". However, given the way the experiments are designed (at least based on how the results are presented), I cannot understand if this is truly the case.
The following is a list of paragraphs in the text for which I have concerns:
> NIDS must evolve continuously and effortlessly learn from the limited novel attack data.
What does "limited novel attack data" mean? How is such "novelty" determined?
> Furthermore, formulating NID as a supervised binary classification problem (SBCP) will be helpful in differentiating out-of-distribution normal samples from known intrusions well.
Isn't this the de-facto standard in many NIDS evaluations? I think this statement is redundant.
> (e.g., the difference between the number of samples of the minority classes DDOS attack-HOIC and SQL Injection of the CICIDS-2018 [18] dataset is 0.68 million).
Unclear.
> However, this strategy has a pitfall when the count of minority class samples in the finite buffer memory appears to come from a majority class.
Unclear. Plus, what does such "pitfall" lead to?
> making our approach suitable for large-scale training.
What does this mean? Plus, is this a problem in NID?
> Typically, the benign class samples must be chosen for replacement whenever new minority class samples arrive.
Define "typically". Is it "typical" in research? Is it typical "in practice"? On what grounds is this statement made?
> we use original class labels to organize the buffer memory to learn SBCP and chose class samples in memory with higher running statistic values for the replacement to accommodate newly arriving samples.
This statement is clear, but it contrasts with the underlying assumption of seeing NID as a binary classification problem. Indeed, providing (accurate) fine-grained attack labels is a tough problem in NID [N, R].
> This ensured that each task contained a mix of benign and attack data, maintaining the class imbalance resembling real-world network traffic.
How is the "resemblance" achieved?
> We created five tasks for KDDCUP’99 [22] and NSL-KDD [49], ten for CICIDS-2017/2018 [18, 50], nine for UNSW-NB [24, 51, 52, 53, 54], and ten for ANOSHIFT [25] benchmark contains naturally occurring distribution shifts spanning over ten years of network traffic.
The last sentence does not connect with the previous ones.
> We use M=13333 for CICIDS-2017/2018, Anoshift, 5333/1333 for KDDCUP’99/NSL-KDD, 500 for SVHN, CIFAR-10/100 and 666/2666 for CLEAR-10/100.
I do not understand why the first settings have only "M", and the others have two numbers.
### EXTERNAL REFERENCES
[A]: Andresini, Giuseppina, et al. "INSOMNIA: towards concept-drift robustness in network intrusion detection." Proceedings of the 14th ACM workshop on artificial intelligence and security. 2021.
[B]: Apruzzese, Giovanni, et al. "The role of machine learning in cybersecurity." Digital Threats: Research and Practice 4.1 (2023): 1-38.
[C]: Kim, Daniel E., and Mikhail Gofman. "Comparison of shallow and deep neural networks for network intrusion detection." 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2018.
[D]: Apruzzese, Giovanni, et al. "“Real Attackers Don't Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice." 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2023.
[E]: Pontes, Camila FT, et al. "A new method for flow-based network intrusion detection using the inverse Potts model." IEEE Transactions on Network and Service Management 18.2 (2021): 1125-1136.
[F]: Apruzzese, Giovanni, Pavel Laskov, and Johannes Schneider. "SoK: Pragmatic Assessment of Machine Learning for Network Intrusion Detection." IEEE EuroS&P (2023).
[G]: Liu, Lisa, et al. "Error Prevalence in NIDS datasets: A Case Study on CIC-IDS-2017 and CSE-CIC-IDS-2018." 2022 IEEE Conference on Communications and Network Security (CNS). IEEE, 2022.
[H]: Engelen, Gints, Vera Rimmer, and Wouter Joosen. "Troubleshooting an intrusion detection dataset: the CICIDS2017 case study." 2021 IEEE Security and Privacy Workshops (SPW). IEEE, 2021.
[I]: Silva, João Vitor Valle, Martin Andreoni Lopez, and Diogo MF Mattos. "Attackers are not stealthy: Statistical analysis of the well-known and infamous kdd network security dataset." 2020 4th Conference on Cloud and Internet of Things (CIoT). IEEE, 2020.
[J]: Dietz, Christian, et al. "DMEF: Dynamic Malware Evaluation Framework." NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium. IEEE, 2022.
[K]: Venturi, Andrea, et al. "Drelab-deep reinforcement learning adversarial botnet: A benchmark dataset for adversarial attacks against botnet intrusion detection systems." Data in Brief 34 (2021): 106631.
[L]: De Shon, Markus. "Information Security Analysis as Data Fusion." 2019 22th International Conference on Information Fusion (FUSION). IEEE, 2019.
[M]: Jacobs, Arthur S., et al. "AI/ML for Network Security: The Emperor has no Clothes." Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. 2022.
[N]: Apruzzese, Giovanni, Pavel Laskov, and Aliya Tastemirova. "SoK: The impact of unlabelled data in cyberthreat detection." 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). IEEE, 2022.
[O]: Sommer, Robin, and Vern Paxson. "Outside the closed world: On using machine learning for network intrusion detection." 2010 IEEE symposium on security and privacy. IEEE, 2010.
[P]: Hill, Mark D. "What is scalability?." ACM SIGARCH Computer Architecture News 18.4 (1990): 18-21.
[Q]: Luke, Edward A. "Defining and measuring scalability." Proceedings of Scalable Parallel Libraries Conference. IEEE, 1993.
[R]: Van Ede, Thijs, et al. "Deepcase: Semi-supervised contextual analysis of security events." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022.
[S]: Apruzzese, Giovanni, Luca Pajola, and Mauro Conti. "The cross-evaluation of machine learning-based network intrusion detection systems." IEEE Transactions on Network and Service Management (2022).
[T]: Pendlebury, Feargus, et al. "TESSERACT: Eliminating experimental bias in malware classification across space and time." Proceedings of the 28th USENIX Security Symposium. USENIX Association, 2019.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I thank the authors for carrying out the research discussed in this paper. Ultimately, I **really** liked the paper: the research direction is crucial, the proposed methodology is "sensible", the evaluation is massive and (on the surface) rigorous. However -- as discussed in my review -- I have some doubts on the "practical" effectiveness of the proposed method.
Below is a set of questions that I invite the authors to answer. I must stress that my goal, here, is to assist the authors in realizing a work that is "outstanding": given the effort put by the authors in this submission, I have reason to believe that the authors are capable of this.
* Q1) What is the envisioned system infrastructure in which the proposed method is meant to be deployed?
* Q2) Can the authors, in the (short) timespan allotted for NeurIPS'23 review process, carry out additional experiments on a better dataset? (see my review)
* Q3) Can you explain Figure 3 in detail?
* Q4) What features are analyzed by the NIDS models?
* Q5) Please provide convincing arguments that the paper is not affected by data-snooping -- and, if this is not the case, then provide feasible ways in which the paper can be amended.
* Q6) Please elucidate how the "temporal" aspect is accounted for in the experiments. Suggestions are in [A, F, T].
Please answer these questions fairly.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are stated, but only in the supplementary material. However, according to this reviewer, these limitations should be expanded significantly (the reasons are provided in my review). For instance, most assumptions are overly optimistic (i.e., immutable attack landscape, widespread deployment of deep neural networks for NID) and the data hardly reflects a real network and/or which is in constant evolution.
## UPDATE AFTER AUTHOR's RESPONSE
I am increasing my score from a 5 to a 6, as well as the "Contribution" (from a 2 to a 3) in light of the clarifications and experiments carried out by the authors during the discussion phase.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely request reviewer ZBnA to read the **responses to weakness section** mentioned in the pdf (we could not include it here due to space limitations) before reading our responses to the questions.
**RESPONSES TO QUESTIONNAIRES**
**R1**: The proposed model does not require a huge system infrastructure (memory and processing resources), as it uses a simple multi-layer perceptron (with only 5 to 6 hidden layers), which is a low-complexity (low-capacity) architecture compared to large architectures like ResNet, which have stacked convolution operations. So, it can also be deployed on edge devices. Further, our solution is based on deep models, and we recommend using MLOps for low-latency inference in real-world deployments.
**R2**: To strengthen our findings, we carried out additional tests using CTU-13 [K] and the newer CICIDS-2017 and CICIDS-2018 datasets ([G]). These results are presented in Table 2 of the PDF. Notably, our proposed methods consistently perform well on these datasets, aligning with the outcomes we previously shared in the paper. However, we could not run GSS-greedy and A-GEM, both because their performance is comparable to that of the GEM method and because of the short deadline.
**R3)** Memory replay-based continual learning methods, such as MIR, select specific samples from memory based on their impact on the model's parameter updates. They prefer samples with higher resulting losses. While these updates occur frequently during incoming data batches, they are only used for sample selection and not applied afterward. However, these frequent updates can introduce substantial computational overhead, especially for long data streams when training deep models, due to the complex computations involved on large weight matrices. To address this, we investigated the relationship between regular and virtual updates of model parameters. Through t-SNE visualizations on datasets like CICIDS-2018, CIFAR-10, and CLEAR-10, we observed that virtual updates change gradually and tend to align with regular updates. To capture this phenomenon, we modified regular updates to approximate virtual ones. The challenge was quantifying this change. We approached it by examining the distribution of errors between virtual and regular updates across various model parameters, discovering a skewed Gaussian distribution. We used a two-component Gaussian mixture model (GMM) to represent this distribution, preserving individual error distributions for each parameter.
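A minimal sketch of the error-modeling step described in this response (illustrative only: the "errors" below are synthetic stand-ins for the virtual-minus-regular update differences, and the EM routine is a generic two-component 1-D GMM fit, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "virtual - regular" parameter-update errors: a skewed
# distribution mimicked by mixing two Gaussians (stand-in data).
errors = np.concatenate([rng.normal(0.0, 0.01, 8000),
                         rng.normal(0.05, 0.03, 2000)])

def fit_gmm2(x, iters=100):
    """Plain EM for a 1-D, two-component Gaussian mixture."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = w / np.sqrt(2 * np.pi * var) * \
               np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

w, mu, var = fit_gmm2(errors)

# PAPA-style use: approximate a virtual update as the regular update
# plus a perturbation drawn from the fitted mixture.
comp = rng.choice(2, size=1000, p=w)
perturbation = rng.normal(mu[comp], np.sqrt(var[comp]))
```

The fitted mixture recovers the dominant near-zero error mode and the smaller skewed mode, so perturbing regular updates with samples from it avoids recomputing the expensive virtual updates.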
**R4**: Here, we provide a list of features analyzed for each dataset.
Regarding the newer CICIDS2017 and CICIDS-2018 datasets recommended by the reviewer, these datasets initially included more than 90 features. However, we removed flow-specific identifiers like flow ID, source and destination IPs, and timestamps, leaving approximately 51 features. Feature engineering was conducted using Pearson correlation with a threshold of 90%. Subsequently, the selected features underwent min-max normalization before training.
Older CICIDS-2017 and CICIDS-2018 (used in prior experiments): These datasets initially contained nearly 80 features. We first removed flow-specific features. Then we excluded redundant features like Bwd PSH Flags, Bwd URG Flags, and six other features, as they contain invalid/zero entries, followed by normalizing the data using a min-max normalizer. Then, we removed duplicate and inconsistent rows (similar feature values but with different labels). Eventually, we started the training process with the remaining 70 features.
CTU-13: As recommended by this reviewer, we used the dataset from [K]. We perform min-max normalization on this dataset before training.
KDDCUP & NSL-KDD: For these datasets, "service","is_host_login", and "num_outbound_cmds" features were removed as they were found to contain redundant values.
UNSW-NB15: For this dataset, we removed the 'srcip', 'sport', 'dstip', 'dsport', 'Stime', and 'Ltime' features. We performed categorical data encoding for features like ‘proto’, ‘state’, and ‘service’, followed by min-max normalization, after which the training process was invoked with the remaining features.
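The correlation-based pruning and min-max normalization described above can be sketched as follows (a simplified illustration with synthetic data; the feature matrix and the greedy drop rule are placeholders for the per-dataset choices listed above, with the stated 90% Pearson threshold):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic feature matrix; feature 3 is a near-duplicate of feature 0.
X = rng.normal(size=(500, 5))
X[:, 3] = X[:, 0] * 0.98 + rng.normal(scale=0.01, size=500)

# 1) Greedily drop any feature whose |Pearson r| with an already-kept
#    feature exceeds 0.9.
corr = np.corrcoef(X, rowvar=False)
keep = []
for j in range(X.shape[1]):
    if all(abs(corr[j, k]) <= 0.9 for k in keep):
        keep.append(j)
X = X[:, keep]

# 2) Min-max normalize the remaining features to [0, 1].
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(keep)  # feature 3 is dropped: [0, 1, 2, 4]
```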
**R5)**: We have clarified potential data snooping in the previous section (Addressing Weakness). Here, we extend the discussion on the training process. The test set was strictly avoided throughout our training experiments. All experiments exclusively used training or validation sets, and data normalization relied on min-max techniques with the entire dataset, ensuring no test data influence on outcomes.
Furthermore, we acknowledge the removal of timestamp features from certain datasets, which does not qualify as temporal snooping due to the formulation of the intrusion detection problem in continual learning. Our experiments operated in a domain learning setup, where consistent labels (0, 1) are assigned to each task, with label distributions evolving over subsequent tasks. This approach targets temporal distribution levels for each task, diverging from examining within-class (benign, attack) distribution using timestamps. We suggest employing RNN and LSTM alongside continual learning to capture temporal aspects within each class distribution.
**R6)** The main theme of continual learning is to learn a set of tasks with varying distributions arriving sequentially without forgetting previously learned tasks (avoiding catastrophic forgetting). In this paradigm, the temporal aspect is considered at the distribution level of each task (class incremental, task incremental, and domain incremental) rather than the temporal aspect within the distribution. Based on this intuition, we conducted experiments in the domain incremental learning setting. As a result, we don't consider timestamps in some of our experiments. However, we advocate using RNN and LSTM with continual learning to capture temporal aspects within and between the distributions of different classes.
---
Rebuttal Comment 1.1:
Title: Nice work
Comment: Dear authors,
thank you for your response. I greatly appreciated the new experiments and the improved descriptions -- especially the ones focusing on RF.
I still believe that the experimental settings excessively lean towards the "closed world" assumption, thereby undermining the "real value" of the proposed method. However, in light of the extensive and rigorous set of experiments (which has to be combined with the overall goal of the paper, which is commendable) I am confident that future research efforts can benefit from the research described in this work.
For these reasons, I am increasing my score. However, I also encourage the authors to revise their paper by openly accounting for the "practical limitations" that affect their evaluation (in general, the paper needs a "tone down").
---
Reply to Comment 1.1.1:
Title: Thanking reviewer ZBnA
Comment: Dear reviewer **ZBnA**,
We appreciate the amount of **time** and **effort** in providing the **most insightful** feedback on our submission. We agree to incorporate all your suggestions in the final version of the paper. | Summary: The authors propose techniques for improving how to select samples for
replacement in the memory for continual learning and how to estimate
virtual SGD in MIR to reduce computation.
For replacement in memory, CBRS does not keep track of class counts,
and replacement might occur on non-majority samples in the memory.
Instead, the proposed ECBRS (Extended Class Balancing Reservoir
Sampling) keeps track of class counts (global information), and
replacement prefers classes with the highest count. Each class c has
gamma(c) as the expected count in the memory with weights favoring
smaller classes. Based on gamma(c), the next largest class might be
chosen.
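A rough toy sketch of this replacement scheme (the quota weighting and victim selection below are illustrative guesses, not the exact ECBRS algorithm):

```python
import random
from collections import defaultdict

class ECBRSBuffer:
    """Toy class-balancing reservoir with global running class counts."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []                 # stored (x, y) pairs
        self.seen = defaultdict(int)     # global running count per class
        self.in_mem = defaultdict(int)   # per-class count inside the buffer

    def _gamma(self, c):
        # Expected per-class count in memory, weighted to favor rarer classes.
        weights = {k: 1.0 / n for k, n in self.seen.items()}
        return self.capacity * weights[c] / sum(weights.values())

    def _victim_class(self):
        # Prefer the class with the highest running count that still has
        # samples in the buffer and meets its quota gamma(c); otherwise
        # fall back to the class occupying the most buffer slots.
        for c in sorted(self.seen, key=self.seen.get, reverse=True):
            if self.in_mem[c] > 0 and self.in_mem[c] >= self._gamma(c):
                return c
        return max(self.in_mem, key=self.in_mem.get)

    def add(self, x, y):
        self.seen[y] += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
            self.in_mem[y] += 1
            return
        c = self._victim_class()
        slots = [i for i, (_, yy) in enumerate(self.buffer) if yy == c]
        self.in_mem[c] -= 1
        self.in_mem[y] += 1
        self.buffer[random.choice(slots)] = (x, y)
```

Feeding a heavily imbalanced stream through this sketch shows the intended effect: samples of the majority class are evicted first, so minority-class samples survive in the buffer.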
They also propose Perturbation Assistance for Parameter Approximation
(PAPA) for MIR. In MIR sampling from the memory is informed by the
loss from virtual SGD parameter (VSP) updates for each sample, which
incur additional overhead. They observe that VSP overlaps or scatter
around regular SGD parameter (RSP) updates. They modeled the
difference between VSP and RSP with a two-component Gaussian Mixture
Model (GMM). Then they estimate VSP from RSP and the GMM. The GMM is
trained on one task and used in the remaining tasks.
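The core idea could be sketched like this (toy code using scikit-learn; the synthetic error data and the `approximate_vsp` helper are assumptions for illustration, not the paper's implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy VSP - RSP errors from the first task: one tight component near
# zero ("overlap") and one broader component ("scatter").
errors = np.concatenate([rng.normal(0.00, 0.01, size=(500, 1)),
                         rng.normal(0.05, 0.05, size=(500, 1))])

# Fit a two-component GMM on the task-1 errors.
gmm = GaussianMixture(n_components=2, random_state=0).fit(errors)

def approximate_vsp(rsp):
    """Estimate virtual-SGD parameters by perturbing the regular-SGD
    parameters with errors sampled from the task-1 GMM."""
    perturbation, _ = gmm.sample(rsp.size)
    return rsp + perturbation.reshape(rsp.shape)

vsp = approximate_vsp(np.zeros(8))   # placeholder RSP vector
```

Reusing the task-1 GMM on later tasks avoids the per-sample virtual update, which is where the training-time savings come from.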
Empirical results indicate that ECBRS generally outperforms 7 existing
methods on 12 datasets. ECBRS can also improve the performance of MIR.
The training time for PAPA is lower than MIR on 11 datasets, while
achieving similar accuracy.
I have read the authors' response and commented on them.
Strengths: 1. The idea of estimating virtual SGD updates is interesting.
2. Empirical results indicate that ECBRS generally outperforms and
PAPA can reduce training time.
Weaknesses: 1. While PAPA is interesting, the reasoning for why the
error distribution from one task is applicable to another task could
be further discussed/explored--see questions below.
2. Font sizes for Table 2-4 and some figures are quite
small--difficult to read.
3. Some items can be clarified--see questions below.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Alg 1: "select a class that is the largest, having higher running
statistics value and non-zero samples in the buffer. Otherwise,
select a class with next higher running statistic value that has
m_c >γ(c)."
It seems the next highest class is selected when the largest class
has no samples in the buffer. This seems to be different from:
line 172: "once it reaches the threshold of the benign class
(gamma(.)= 262), the ECBRS selects the class with
the next highest running static value"
2. line 233: The GMM is trained based on one task and is used for the
other tasks. This implies all the tasks have similar error
distributions, which does not seem to be the case in Figure 3. Learning
a GMM for each task, and comparing the GMMs, would be helpful.
3. Even if the error distributions are similar for the tasks in the paper,
can other tasks have similar error distributions? If not, a separate
GMM would need to be learned, which might negate the computation
savings in estimating VSP.
4. line 266: "a randomly chosen single attack class and the remaining
nine classified as benign." Would one (large) benign class with
the remaining (small) classes as attacks be more realistic (similar
to the example in Figure 1)?.
Minor comment:
line 174: static -> statistic
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations of the proposed approach do not seem to be mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Alg 1: "select a class that is the largest, having higher running statistics value and non-zero samples in the buffer. Otherwise, select a class with next higher running statistic value that has m_c >γ(c)." It seems the next highest class is selected when the largest class has no samples in the buffer. This seems to be different from: line 172: "once it reaches the threshold of the benign class (gamma(.)= 262), the ECBRS selects the class with the next highest running static value"**
R1) Dear reviewer, thanks for raising this question. We acknowledge that this issue was caused by a typo in the presented ECBRS algorithm: the condition m_c >= gamma(c) needs to be added in line 169 of the paper. This condition ensures that, irrespective of the higher running statistic value of class 'c', our proposed ECBRS method will not reduce that class below the threshold gamma(c).
**Q3.Even if the error distributions are similar for the tasks in the paper, can other tasks have similar error distributions? If not, a separate GMM would need to be learned, which might negate the computation savings in estimating VSP.**
R2: As mentioned previously, incorporating a two-component Gaussian Mixture Model (GMM) has emerged as one of our key empirical findings. In this context, we aim to shed light on the fundamental nature of the two GMM components from the perspective of task similarity. The computation of task similarity relies on the optimal transport dataset distance (OTDD).
We designed an experiment to learn tasks sequentially from MNIST and CIFAR-10. Before initiating the experiments, we calculated the mean OTDD value for each task, considering its relation to the remaining tasks. These values are not uniformly distributed compared to the experiments conducted solely on the CIFAR-10 dataset. This discrepancy suggests a higher dissimilarity within these experiments.
|Dataset|Task1|Task2|Task3|Task4|Task5|Task6|Task7|Task8|Task9|Avg OTDD values across all tasks (Mean±std)|
|-|-|-|-|-|-|-|-|-|-|-|
|CIFAR-10|258|255|220|243|212|241|225|236|244|237.2298 ±10.067|
|MNIST+CIFAR10|310|**448**|310|**444**|310|**456**|310|**470**|310|374.179±**50.024**|
Furthermore, during these experiments, we observe a significant disparity in the performance values between the MIR and PAPA algorithms. Notably, all these experiments are performed using 20 distinct task orders, and the reported results encompass both mean and standard deviation values.
|Algorithm|PR-AUC(O)|PR-AUC(I)|ROC-AUC|
|-|-|-|-|
|MIR|**0.675±0.053**|**0.700±0.030**|**0.657 ±0.040**|
|PAPA|0.645±0.047|0.661±0.099|0.628± 0.080 |
Additionally, we aimed to empirically validate this observation through a computational approach. In this pursuit, after conducting experiments using the MIR algorithm, we calculated error values for the parameters and tried to fit a Gaussian Kernel Density Estimator (KDE). However, we encountered a singular matrix error while attempting the KDE fit, preventing us from generating a Gaussian approximation. It is noteworthy that this issue does not arise in other experimental scenarios. From this experience, we deduce that the two-component Gaussian Mixture Model is feasible whenever the variance in the mean OTDD value of each task with respect to the remaining tasks is low.
**Q2) The GMM is trained based on one task and is used for the other tasks. This implies all the tasks have similar error distributions, which does not seem to be the case in Figure 3. Learning a GMM for each task, and comparing the GMMs would be helpful**
R3) We acknowledge the reviewer's concern about the error distribution. In Figure 3, each row of subfigures shows the error distribution of the model parameters on a particular dataset; this observation stands as the basis for developing the PAPA approach. We assume that stacking the error distribution figures of the different datasets may have caused ambiguity for the reviewer; once again, the error distribution for each dataset is displayed in a row labeled with the corresponding dataset name. Based on the response to question 3 (Q3), we advocate computing a separate GMM for each task whenever task dissimilarity is high, for instance in an experimental setting such as training jointly with the MNIST and CIFAR-10 datasets.
**Q4)Line 266: "a randomly chosen single attack class and the remaining nine classified as benign." Would one (large) benign class with the remaining (small) clases as attacks be more realistic (similar to the example in Figure 1)?.**
R4) **CIFAR-100**: Following your input, we conducted experiments on the CIFAR-100 dataset, focusing on its super-class labels. Six classes were grouped to form a larger benign task, then split into ten parts. Similarly, the remaining classes (shuffled together) were divided into fifteen parts, and we formed ten tasks. This approach aimed to maintain a class imbalance ratio of 1:5 or 1:20 in combined tasks. Remarkably, these results aligned with our main paper findings. Another notable finding was that ECBRS and PAPA consistently performed much better than all the other methods. ECBRS outperformed baselines by a large margin as the class imbalance ratio increased from 1:5 to 1:20.
**CLEAR-10**: We also conducted experiments similar to CIFAR-100. Specifically, we used four classes across all the ten years to create a larger benign file and used the rest of the six classes to create 16 different attack classes in such a way that the class imbalance per task would be 1:20. Further, we split the larger benign class into ten parts and created nine tasks. In this experiment, each task contains one benign and one attack class. Interestingly, even in this setup, the performance results stayed consistent with what we had reported in the main paper.
**Due to the space limitation, we report the results of these experiments in Table 1 of the uploaded pdf**
---
Rebuttal Comment 1.1:
Title: comments on response
Comment: Thanks for your response.
Q2 and Q3:
> we advocate computing separate GMM for each task whenever task dissimilarity is high
You seem to have verified that error distributions could be different and separate GMMs would be needed. This diminishes the results in the paper which are based on assuming the error distributions are similar and the same GMM is used with VSP across tasks/datasets. Perhaps the paper can be improved by somehow estimating the similarity of error distributions among tasks (maybe via OTDD) and deciding if a current GMM can be reused with VSP or not.
Q4:
1:5 or 1:20 imbalance ratio might still be small. Depending what kinds of attack scenarios you are targeting, some scenarios would have significantly larger imbalance ratio. For network intrusions, from the title of your paper, 1:100 is not uncommon. Also, not sure why CIFAR-100 or MNIST is related to network intrusions.
---
Reply to Comment 1.1.1:
Title: Q2 and Q3: why the occurrence of task dissimilarity between two different tasks is rare in our setting?
Comment: **Response:**
Here, we will demonstrate that the error distributions between different tasks remain similar even with varying task orders and can be modeled using a two-component GMM. Additionally, we will illustrate that cases where dissimilar error distributions arise from two dissimilar tasks are rare in the context of network intrusion detection systems (NIDS).
**A two-component GMM is capable enough for various task orders**
We direct the reviewer's attention to Section A.2.2 and Tables 5 and 6 of the supplementary material. In these sections, we conducted experiments using various datasets with four distinct task orders, each having a different first task. The performance results for each task order are presented in Table 6, and Table 5 illustrates the variance in these reported values. Notably, we observe that the variance of performance results falls within the range of $10^{-7}$ to $10^{-4}$. This observation indicates that the influence of the different first tasks used in the computation of the Gaussian Mixture Model (GMM) on the performance of PAPA is minimal. This underscores that the GMM constructed based on the initial error distribution (ED) effectively approximates the ED of subsequent tasks, irrespective of the choice of the first task.
**Why the occurrence of two dissimilar tasks in a NIDS setting is rare?**
We believe some clarification is needed here. Firstly, we want to clarify that the OTDD values reported in the table of the previous response for the MNIST+CIFAR-10 experiment are not between two adjacent tasks. Instead, for a task (say t1), the value reported in the table is the mean OTDD value computed with the other remaining tasks (t2, t3, ..., t9), as follows.
$$\mathrm{MeanOTDD}(t_{1}) = \frac{1}{8} \sum_{i=2}^{9} \mathrm{OTDD}(t_{1}, t_{i})$$
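Given a dictionary of precomputed pairwise OTDD values, this per-task mean could be computed as follows (a hypothetical helper, for illustration only):

```python
def mean_otdd(pairwise, t):
    """Mean OTDD of task t against every other task.
    `pairwise[(a, b)]` holds a precomputed OTDD(a, b) value."""
    vals = [v for (a, b), v in pairwise.items() if t in (a, b)]
    return sum(vals) / len(vals)
```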
We explicitly show these values to showcase an extreme case where the proposed PAPA method would not perform well. However, in practice, finding these values is only possible if access to past and future tasks is allowed, and in the continual learning setting such access is restricted. On the other hand, computing the OTDD values between two adjacent tasks does not help in determining the similarity between two tasks; refer to Table 1 below for an example.
Table 1: OTDD values computed between two adjacent tasks
| Experiment | (0,1) | (1,2) | (2,3) | (3,4) | (4,5) | (5,6) | (6,7) | (7,8) | Mean± std |
| - | - | - | - | - | - | - | - | - | - |
| MNIST + CIFAR-10 | 424.29 | 404.25 | 405.68 | 405.70 | 409.09 | 409.09 | 407.59 | 407.59 | 409.16 ±4.399 |
The reason for calling the MNIST+CIFAR-10 experiment an extreme case is based on the following two observations.
1) In this experiment, training data arrives from two different data sources in the domain incremental setting with differing characteristics (size, image type, etc.; refer to Table 2 below). In practice, however, it is difficult to find a use case with such a shift between tasks of different characteristics (e.g., switching between grayscale and RGB images while learning sequentially).
Table2: characteristics of the MNIST and CIFAR-10 datasets
|Dataset|Size|No of channels|Image type|
| - | - | - | - |
|MNIST|28 X 28|1|Grayscale|
|CIFAR-10|32 X 32|3|RGB |
2) The Anoshift dataset contains diverse traffic from five different networks [Ref1] of Kyoto University spanning ten years. As also pointed out by **reviewer ZBnA**, Anoshift is a challenging dataset for network intrusion detection experiments (it contains data points from different networks) compared to other datasets created over a short time on a single network (refer to Table 3 for the mean OTDD values of each dataset, where the OTDD value is computed between adjacent tasks). For example, consider datasets like CICIDS2017 and CICIDS2018, which were framed using traffic from a single network, where a high distribution shift may not happen. On the other hand, our proposed PAPA works better than the baseline methods on the challenging Anoshift dataset (with diverse traffic).
Table 3: Mean OTDD values (each task's mean OTDD computed with the remaining tasks)
| Dataset | Mean OTDD values of all the tasks |
| - | - |
| CICIDS2017 | 0.1422 ± 0.0439 |
| CICIDS2018 | 0.05635 ± 0.0239 |
| Anoshift | 5,611 ± 861.54 |
To conclude, our intention in framing the MNIST+CIFAR-10 experiments is to showcase the extreme case where the proposed approach is not expected to perform well in domain incremental learning. However, encountering such a scenario is rare in the NIDS setting.
[Ref1]. Song, J., Takakura, H., Okabe, Y., Eto, M., Inoue, D., & Nakao, K. (2011, April). Statistical analysis of honeypot data and building of Kyoto 2006+ dataset for NIDS evaluation. In Proceedings of the first workshop on building analysis datasets and gathering experience returns for security (pp. 29-36).
---
Rebuttal 2:
Title: Check the rebuttal
Comment: @Reviewer KabC,
Does the rebuttal address your concerns? | Summary: This paper improves upon existing memory replay-based continual learning methods for anomaly detection. First the authors extend class balancing reservoir sampling (CBRS) and develop ECBRS, using global information in order to keep more accurate information about class imbalance. Second, the authors proposed a perturbation-assisted parameter approximation (PAPA) method for estimating virtual SGD parameter (VSP) updates, resulting in reduced training time for methods like maximally interfered retrieval (MIR). The proposed methods are evaluated on intrusion detection, computer vision, and anomaly detection datasets, and shown to increase performance while reducing training time.
Strengths: This paper relies on simple heuristics that are shown to be quite effective for improving upon existing CL methods. The authors have also done extensive evaluation of their methods using several datasets, including ablation studies. In addition to performance improvements, the proposed methods can also reduce training times, resulting in better scalability.
Weaknesses: - Since the paper heavily relies on CBRS and MIR, I suggest that the authors briefly describe these methods to make the paper more standalone and to make its novelties more clear.
- While the proposed methods are sound, I am not sure if the utilized evaluation datasets can fully support the paper's claims. The paper mainly focuses on learning from data with distribution shifts, however all but one of the evaluated datasets correspond to a short time period with little to no distribution shift.
- Related to the above, I am not sure if the inclusion of image datasets is needed as the proposed methods are presented for network intrusion datasets. The authors mention that they include image dataset to evaluate learning on data with distribution shifts, but I am not sure if image datasets contain any distribution shift.
- The authors should also include the standard deviation of AUC scores in Tables 2, 3.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Do the authors have an idea on why PAPA achieves better performance as compared to MIR in Table 3? Since PAPA performs an approximation, I would have expected the AUC scores to be slightly lower than those for MIR.
- Do the authors have an insight as to why the error between regular/virtual parameter updates resembles a two-component GMM? Is it reasonable to expect this assumption to hold on other datasets?
- What is the reason for including image datasets for evaluation. If the reason is distribution shifts, can you state why you expect these datasets to contain distribution shifts?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1) Do the authors have an idea on why PAPA achieves better performance as compared to MIR in Table 3? Since PAPA performs an approximation, I would have expected the AUC scores to be slightly lower than those for MIR**.
R1). We sincerely thank the reviewer for bringing up this important observation. After your feedback, we thoroughly investigated the issue and successfully identified the root cause. The problem arose from using **drop_last=False** in the data loader for specific datasets. We have addressed this issue and re-conducted the experiments. As a result of the fix, we now observe consistent and uniform behavior across all the datasets. The affected datasets, namely NSL-KDD, CICIDS2017, ANOSHIFT, and CIFAR-100, have all been rectified. Additionally, we noticed a minor discrepancy in the results of SVHN. However, this discrepancy is insignificant, as it only appears at the third decimal position and can be attributed to noise in the training process, which we can safely ignore. (refer to table below). We appreciate your diligence in reviewing our work, and your feedback has helped us improve the accuracy and reliability of the results presented in the paper.
|Dataset|Method|PR-AUC(O)|PR-AUC (I)|ROC-AUC| Train time |
|-|-|-|-|-|-|
|NSL-KDD|MIR+ECBRS|0.964±0.009|0.971±0.003|0.97±0.004|27.86|
||PAPA|0.961±0.017|0.968±0.007|0.969±0.006|20.75|
|CICIDS2017|MIR+ECBRS|0.994±0.001|0.993±0.000|0.993±0.000|398.4|
||PAPA|0.994±0.001|0.992±0.003|0.993± 0.003| 239.8|
|ANOSHIFT|MIR+ECBRS|0.944±0.004|0.926±0.011|0.934±0.009|1273.4|
||PAPA|0.947±0.005| 0.927±0.011| 0.934±0.007|982.2|
|CIFAR-100|MIR+ECBRS|0.882±0.011|0.864±0.014| 0.871±0.013| 227.4|
||PAPA|0.877±0.011|0.861±0.012|0.870±0.010|119.8|
**Q2.Do the authors have an insight as to why the error between regular/virtual parameter updates resembles a two-component GMM? Is it reasonable to expect this assumption to hold on other datasets?**
R2: As mentioned previously, incorporating a two-component Gaussian Mixture Model (GMM) has emerged as one of our key empirical findings. In this context, we aimed to shed light on the fundamental nature of the two GMM components from the perspective of task similarity. The computation of task similarity relies on the optimal transport dataset distance (OTDD)[2].
During this study, we designed an experiment to learn tasks sequentially from MNIST and CIFAR-10. Before initiating the experiments, we calculated each task's mean OTDD[2] value, considering its relation to the remaining tasks. These values are not uniformly distributed compared to the experiments conducted solely using the CIFAR-10 dataset. This discrepancy suggests a higher dissimilarity within these experiments.
| Dataset | Task1 | Task2 | Task3 | Task4 | Task5 | Task6 | Task7 | Task8 | Task9 | Avg OTDD value across all tasks(Mean±std )|
|-|-|-|-|-|-|-|-|-|-|-|
| CIFAR-10| 258| 255| 220| 243| 212| 241| 225|236| 244| 237.2298±10.067|
| MNIST+CIFAR10 | 310| **448**|310|**444**|310|**456**|310|**470**|310|374.179±**50.024**|
Furthermore, during these experiments, we observed a significant disparity in the performance values between the MIR and PAPA algorithms. Notably, all these experiments are performed using 20 distinct task orders, and the reported results encompass both mean and standard deviation values.
| Algorithm | PR-AUC (O)| PR-AUC (I)| ROC-AUC|
|-|-|-|-|
| MIR| **0.675 ± 0.053** |**0.700 ± 0.030** | **0.657 ±0.040**|
| PAPA| 0.645 ± 0.047| 0.661 ± 0.099 | 0.628± 0.080|
Additionally, we aimed to validate this observation through a computational approach empirically. In this pursuit, after conducting experiments using the MIR algorithm, we calculated error values for the parameters and endeavored to fit the Gaussian Kernel Density Estimator (KDE). However, we encountered a singular matrix error while attempting the KDE fit, preventing us from generating a Gaussian approximation. It's noteworthy that this issue doesn't arise in other experimental scenarios.
From this experience, we can deduce that the two-Gaussian Mixture Model is feasible whenever the variance in the mean OTDD value of each task concerning the remaining tasks is low.
**Q3: What is the reason for including image datasets for evaluation? If the reason is distribution shifts, can you state why you expect these datasets to contain distribution shifts?**
R3: ANOSHIFT captures long-term distribution shifts (DS), but many NID benchmarks lack concept drift [1]. In the absence of other relevant cybersecurity benchmarks, the proposed methods were also tested on computer vision benchmarks, modified to mimic the behavior of the IDS benchmarks. The ease of visualizing distribution shifts makes CV benchmarks suitable for validating the proposed approaches. These DS in image classification can also be quantified using the optimal transport dataset distance (OTDD)[2] computed between two tasks, where higher OTDD values indicate greater distribution shifts. For instance, on the CIFAR-10 and CIFAR-100 datasets, the mean OTDD values are 205 and 3139, respectively.
|Dataset| Avg.OTDD values across all the tasks|
|-|-|
|CICIDS2018|0.0508|
|CICIDS2017| 0.1527|
|CTU-13|4.44|
|CIFAR-10|231|
|CIFAR-100|3139|
| ANoshift|7189|
[1] Dragoi, M., Burceanu, E., Haller, E., Manolache, A., & Brad, F. (2022). AnoShift: A distribution shift benchmark for unsupervised anomaly detection. Advances in NIPS, 35, 32854-32867.
[2] Alvarez-Melis, D., & Fusi, N. (2020). Geometric dataset distances via optimal transport. Advances in NIPS, 33, 21428-21439.
**Weakness**
*Please read R1 as a response to weakness one identified by the reviewer*
**R1**) We will add more description about CBRS in the supplementary material.
**R4**) We observe that the std for most experiments across all the datasets falls in the 0.001 to 0.010 range, implying low variance, so we excluded them from the main paper. Considering your feedback, we agree to add them to the final paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and the additional tests. I especially appreciate investigating and addressing my first question and agreeing to add more details about CBRS.
---
Reply to Comment 1.1.1:
Title: Thanking reviewer wTtL
Comment: Dear reviewer **wTtL**,
We thank you for providing **constructive feedback**, especially the **AUC comparison** between the MIR and PAPA. This helped in maintaining the accuracy and reliability of the results. We agree to incorporate all your suggestions in the final version of this paper. | null | null | Rebuttal 1:
Rebuttal: This pdf contains Table 1, presenting the results corresponding to the question (Q4) raised by reviewer **KabC**; specifically, results obtained on the CIFAR-100 and CLEAR-10 datasets. Another table (Table 2) contains the results on newer datasets like CTU-13 and the modified CICIDS-2017 and CICIDS-2018, as suggested by reviewer **ZBnA**. Additionally, we include our responses related to the weaknesses identified by reviewer **ZBnA**.
Pdf: /pdf/3fbe94300e0ef9f8772a8101aea51a7bdd38957d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency | Accept (spotlight) | Summary: This paper introduces a novel method to train an interpretable surrogate of pre-trained time series classification models. TimeX produces a latent embedding of time series observation and outputs classification probabilities that are both consistent with a reference model. At the same time, it identifies time localized patterns in the time series that allowed it to make predictions. Last, it learns landmarks in the latent space that allow the user to easily compare the observed time series.
From pre-trained encoder $G$ and decoder $F$, TimeX trains an explanation generator $H_E$, encoder $G_E$ and decoder $F_E$. $H_E$ produces a discrete masking function that localizes and considers only sparse yet connected regions of interest in the input time series. $G_E$ and $F_E$ are jointly trained to respectively preserve the "topology" of the latent spaces and the output distributions.
Please note that the submission is not in my area of expertise. I can assess the soundness of the claims and the presentation but it is hard to evaluate its broader contribution to the field.
Strengths: The notion of Model Behavior Consistency is, from my understanding, the main contribution of this paper. Although many methods aim at embedding high dimensional observations in topology-preserving latent spaces, the idea of learning an interpretable embedding consistent with pre-trained models is novel to my knowledge. I find the approach particularly relevant as high-capacity models become more and more accessible.
Although the technical description of the method is not very well written (see below), each modeling choice is convincing and well motivated (each term of the *ad-hoc* loss is easily interpretable and the need for discrete masking for time series is well demonstrated).
The experimental results demonstrate improved performance that I cannot really evaluate (see above).
Weaknesses: 1) Some statements seem inaccurate if not incorrect. (I will be happy to be proven wrong, as I am not an expert in Time-Series Classification)
- line.22 "State-of-the-art time series models are high-capacity pre-trained neural networks" -> This might be true for time-series classification, but seems wrong in other areas like forecasting, where uncertainty quantification is key.
- There is an apparent conflict between the need for "pinpointing the specific location [...] that influence the model's prediction" (line 24) and the fact that "temporal patterns only show up when looking at time segments and long term trends" (line 54). For example, if the feature of interest is the frequency content of a time series, it seems pointless to precisely localize it in time. From my understanding, TimeX reaches an in-between suitable to pinpoint patterns "broadly-localized" in time. The paper would gain in clarity by stating more clearly what the method is trying to achieve.
- line 47. "the uninterpretable nature of time series data". I would avoid such vague statements. For example, the QRS intervals subsequently analyzed are straightforwardly interpretable and correspond to ventricle depolarization and heart contraction.
2) The description of the method is not intelligible but could easily be improved.
- line 154. The functions $D_Z$ and $D_Z^E$ are used to motivate Model Behavior Consistency, but it is not explained that they convey a notion of distance in the latent space (this only shows up line 241)
- Although I understand the relationship $D_Z(z_i, z_j) \approx D_Z^E(z_i^E, z_j^E)$ (line 154) and its implication, it is very confusing to write $z_j^E = G^E(E(x_i))$ (line 153). $z_j^E$ is a latent representation of the masked time series $x^m_j$. And $E$ is defined above as explanation that takes value in $[0, 1]$ (see line 189). Did I miss something ?
- The notation $H^E: \mathcal{X} \rightarrow p \in [0, 1]^{T \times d}$ does not make sense (line 184). Is $\mathcal{X}$ the input space of the function $H^E$ or its input ? In the former case, one should write $H^E: \mathcal{X} \rightarrow [0, 1]^{T \times d}$. In the latter case it implies that the mask applies uniformly to the whole dataset $\mathcal{X} = (x_i)_i$ which contradicts my intuition from Figure 2.
3) Figure 4 clearly illustrates what part of the ECG the model based its prediction on. Nevertheless, there is no further analysis of the discovered temporal patterns. Are they unexpected or are they stereotyped signals easily classifiable by a physician ? In the latter case, it would have been interesting to use an additional classification task and illustrate how TimeX can identify new temporal patterns in observations, and possibly be used for exploratory data analysis.
Minor points: line 91. XAI: abbreviation not defined
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1) I did not understand how and exactly when the Landmarks are learned. Is it post-hoc? Or is there some partial overlap with the training of the rest of the model which encourages finding a parsimonious latent embedding?
2) It is not really discussed how the two latent spaces dimensions are related. Since $G^E$ is initialized with $G$, I am assuming that they are the same, but I imagine that one might use a lower dimensional embedding space of explanation. It would be relatively straightforward to initialize $G^E:= P G$ where $P$ is a linear projection on a lower dimensional space. Would it make the model even more interpretable (and avoid post-hoc treatment with UMAP as in Figure 2) or would it penalize the model too much ?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Authors adequately addressed the limitations of their model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing valuable comments and critiques of our work. We have worked hard to improve the communication of our method, and we kindly ask you to raise your score. Please reach out to us with any questions.
### W1a: State-of-the-art models
Thank you for pointing out this misleading statement. Many variants of transformers, often very large models, have become preeminent models for various time series tasks. See recent works [1,2] for time series classification, [3,4] for forecasting, and [5] for anomaly detection. Additionally, uncertainty quantification methods have been developed for large neural networks used in forecasting [6]. We have revised our statement: “Prevailing time series models are often high-capacity pre-trained neural networks.”
### W1b: Modeling frequency content of time series
There is indeed value in identifying the specific location of a temporal pattern. For example, in physiological signal diagnosis via ECG or EEG, the exact time that a pattern occurred is of interest to practitioners [7]. TimeX is highly effective in this setting, identifying the location of a temporal pattern while providing similar explanations through the learned landmarks.
However, predictions in time series can be driven by a variety of patterns, such as those from the frequency domain. In these settings, TimeX’s explanations would be suboptimal, as it cannot provide frequency-level explanations. However, baseline time series explainers (Dynamask and WinIT) would also fail in this case. While many time series predictors model the frequency domain [3], to the knowledge of the authors, no time series explainers have been developed with this capability. This desideratum is left for future work.
### W1c: Uninterpretability of time series
Time series data can be visually uninterpretable, as noisy samples and long sequences can obstruct visualization of meaningful signals. Thus, one saliency map is not enough for interpreting time series data, unlike computer vision and natural language processing, where explainers rely on human perception by overlaying saliency maps on images or tokens. This motivates landmark explanations, which provide a mechanism to relate instances across the dataset. We agree that QRS intervals are indeed interpretable by experts, but this is a special case. Decades of research have focused on understanding ECG due to its importance in medicine; thus these data are readily interpretable.
We have edited this claim: “While explainers designed for other modalities can be adapted to time series, their inherent biases can miss important structures in time series, and their reliance on isolated visual interpretability does not translate effectively to time series, where data are less immediately interpretable.”
### W2a: D_Z and D_{Z^E}:
We have now moved our initial definition of $D_Z$ and $D_{Z^E}$ to Section 3 to make the problem formulation more readable.
### W2b: Notation of G^E:
We agree it is an abuse of notation to use $E$ to represent both the explanation and the masked version of the input sample. We have edited the text to use $x_j^m$ to denote the input to $G^E$.
### W2c: Notation of H^E:
Thank you for pointing this out, $H^E: \mathcal{X} \rightarrow [0,1]^{T \times d}$ is the proper notation. We have made this edit in text.
### W3: Discovered temporal patterns
The landmark explanations group together similar explanations that constitute temporal patterns across the dataset. It would be ideal to consult domain experts and conduct a user study to understand if the landmarks represent novel patterns. However, this cannot be conducted within the time allotted for rebuttals. Therefore, we provide a quantitative analysis in **Response Figure 4**.
To examine landmark quality on the ECG dataset, we compare two groups: 1) the landmark group, with the five nearest neighbors around the top-10 filtered landmarks, and 2) the random group, with five random explanations. We then compare the structure of the most salient values in the samples. We mask-in the top-10 time steps as identified by TimeX, and then compute the DTW [8] distance between samples in each group. We then plot the distribution of average within-group DTW distances in **Response Figure 4**. The high quality of the learned landmarks is demonstrated by a lower average DTW distance within their groups. We have added this analysis to the Appendix.
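A minimal sketch of this within-group comparison (hypothetical sequences; a plain-Python DTW rather than the exact implementation used in our experiments):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between
    two 1-D sequences, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def mean_within_group_dtw(group):
    """Average pairwise DTW distance inside one group of sequences."""
    dists = [dtw_distance(x, y)
             for i, x in enumerate(group)
             for y in group[i + 1:]]
    return float(np.mean(dists))
```

A lower `mean_within_group_dtw` for the landmark group than for the random group would indicate that landmark neighborhoods contain structurally similar salient segments.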
### Q1: How landmarks are learned:
Landmarks are learned during training. We do not backpropagate the gradients of the landmark explanation learning to the entirety of the TimeX model to prevent the landmark explanations from harming the performance of the explainer.
### Q2: Latent spaces:
$G$ and $G^E$ produce the embeddings $Z$ and $Z^E$, respectively, but the inputs to $G$ and $G^E$ are different—$\mathcal{X}$ and $\mathcal{X}^m$, respectively—as described in Section 4.2. This allows TimeX to optimize the mask $M_{\mathcal{X}}$ on $G^E$ rather than $G$, since $\mathcal{X}^m$ might be out-of-distribution from $\mathcal{X}$. Therefore, the formulation of $G^E := PG$ is too strict and might prevent the model from learning complex temporal patterns.
**References**: [1] Zerveas et al., A transformer-based framework for multivariate time series representation learning. KDD 2021.
[2] Chowdhury et al., TARNet: Task-aware reconstruction for time-series transformer. KDD 2022.
[3] Zhou et al., FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. ICML 2022.
[4] Wu et al., Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. NeurIPS 2021.
[5] Tuli et al., TranAD: Deep transformer networks for anomaly detection in multivariate time series data. VLDB 2022.
[6] Wu et al., Quantifying uncertainty in deep spatiotemporal forecasting. KDD 2021.
[7] Lundberg et al., Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. *Nat. Biomed. Eng.* 2018.
[8] Müller, Dynamic time warping. Information Retrieval for Music and Motion, 2007.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. Their clarifications were insightful and important (at least to me) for the understanding of their paper.
Nevertheless, I still find it disappointing that although landmark inference is presented as a key feature of the method, the paper does not use them in any way.
I did not doubt that the landmarks were quantitatively good descriptors of the data (and the authors actually provided further evidence for that), but as they pointed out, "it would be ideal to consult domain experts and conduct a user study to understand if the landmarks represent novel patterns" or at least meaningful. It may be worth showing, at least, that the Landmarks are more stable, or more "interpretable" than simply applying K-Means on the latent space after fitting TimeX.
---
Reply to Comment 1.1.1:
Comment: Thanks for your continued feedback, we’re glad to hear that our clarifications were helpful and that your remaining concern is just with the landmarks.
Our main contribution is the TimeX method, not landmarks. As you note in your initial review, TimeX is highly relevant, and our modeling choices are well-motivated. Further, our main results, presented in Tables 1, 2, and 3, along with Figure 4, all use the most important metrics in our field. While conducting human studies is an exciting future step, it is out of scope for this work. Our intention is not to present landmarks as a key feature; we will address your comments by toning down our language and expanding the limitations section in the main manuscript.
Still, we address your lingering concerns about interpreting landmarks in 3 points:
1. Designing, conducting, and evaluating high-quality user studies is a major undertaking worthy of its own complete work. Doing these studies for explainability is important work, and it’s an active area of research on its own. Given that TimeX consistently outperforms state-of-the-art explainability methods, we hope it will be included in such studies.
2. We have now conducted a preliminary discussion with a medical doctor who frequently works with ECG data and with machine learning. In this conversation, the doctor said there is likely meaning in the landmarks in Figure 4: Series 1, 2, and 3 clearly have irregular beats, possibly related to long QT periods. Series 4, 5, and 6 are heterogeneous, possibly related to variance in the ST segment. A more comprehensive user study done in collaboration with medical experts would, however, require considerably more time than what is possible in this relatively short discussion period. To address your question, we will make sure to clearly discuss user studies as future work and include references to sample human evaluation studies.
3. We have run an additional experiment comparing the clustering performance of our landmarks to those learned by k-means on the final latent space. On the ECG dataset, we match embeddings to their nearest landmarks, to their nearest k-means cluster centroids, and to randomly-selected points as a baseline. For each method, we use 50 clusters. We then evaluate each clustering method's Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI), which are standard clustering metrics, against the ground-truth labels and report standard error (+/-) computed over 5-fold cross validation. Higher metrics for one set of centroids would indicate that proximity is more meaningful with respect to the prediction task. The results are shown in the table below. Here we see higher NMI and ARI for landmarks, which means the landmarks are a better heuristic for forming clusters than the K-means centroids for this task.
| Clustering Method | NMI | ARI |
| :----: | :----: | :----: |
| Landmarks | 0.191 +/- 0.014 | 0.152 +/- 0.026 |
| K-means | 0.147 +/- 0.010 | 0.025 +/- 0.001 |
| Random | 0.109 +/- 0.030 | 0.027 +/- 0.042 | | Summary: This paper proposed TimeX that creates an interpretable surrogate model for pretrained models. To ensure faithfulness to the reference model, this paper introduces a self-supervised objective, model behavior consistency, a novel formulation that ensures the preservation of relationships in the latent space induced by the pretrained model, as well as the latent space induced by TimeX. As a result, TimeX could find well-distributed landmarks and highlight a range of salient timestamps.
Strengths: 1. It is well-written and has strong motivation.
2. This is the first paper to suggest in-hoc explanations for time series data.
3. This article offers additional justifications for its design decisions, including discrete masking and consistency learning.
4. The evaluation is very convincing, using diverse datasets and many compared methods.
Weaknesses: 1. No discussion about the quality of explanation. Please see questions.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Any metric for the quality of the explanation? For example, better explanation should show high IoU with ground truth. This kind of qualitative analysis would assure the superiority of TimeX.
2. If Table 3 is at the top of the page, the visibility would be improved.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation that authors provided is acceptable as there is no ground truth for other tasks except classification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the useful feedback about our work. We appreciate the reviewer’s acknowledgement of the novelty of our work, as well as the utility and diversity of our extensive evaluations. We have responded to your concerns in the below comments, running 1 additional experiment. We encourage you to reach out if there are any further questions or clarifications needed.
### W1 and Q1: Quality of explanations
We believe there is some confusion regarding our experiments. We use two primary setups to evaluate the quality of TimeX explanations - ground-truth comparison and ablation analysis. These are both common analyses in XAI literature, including time series XAI literature [1,2,3].
In the ground-truth comparison, we compare the explanations of TimeX and baselines against the temporal signals in each dataset that are known to drive the model prediction. We use four synthetic datasets and one real-world dataset (ECG) for this evaluation setup. We then use three separate metrics that compare **ground-truth explanations to explanations from each XAI method**. The area under the precision-recall curve (AUPRC), area under the precision curve (AUP), and area under the recall curve (AUR) are all metrics to evaluate the correspondence between explanations from each method and ground-truth explanations. These evaluations are described in detail throughout Sections 5 and 6 and Appendix C. Intersection over Union (IoU) measures the same quality of explanations—correspondence to ground-truth explanations—as AUPRC, AUP, and AUR. Therefore, our chosen metrics in this setup measure the same qualities of explanations as the IoU metric.
| Method | IoU | AUPRC |
|---|---|---|
| Integrated Gradients | 0.3750 $\pm$ 0.0022 | 0.5760 $\pm$ 0.0022 |
| Dynamask | 0.2958 $\pm$ 0.0014 | 0.4421 $\pm$ 0.0016 |
| TimeX | 0.5214 $\pm$ 0.0019 | 0.7124 $\pm$ 0.0017 |
To illustrate this point, we include results in the table above of TimeX and two of the strongest baselines—Dynamask and Integrated Gradients—measuring the IoU score on the SeqCombSingle dataset. For comparison to our metrics, we include the AUPRC results for the same methods. The IoU metric has high correlation with the AUPRC metric, with each metric resulting in the same ranking of methods and TimeX achieving the highest metric.
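As a rough illustration of why the two metrics track each other, the following sketch computes IoU and AUPRC on a toy saliency map (the arrays are hypothetical, not our data; AUPRC uses scikit-learn's `average_precision_score`):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def iou_score(saliency, ground_truth, threshold=0.5):
    """IoU between a thresholded saliency map and a binary
    ground-truth explanation mask."""
    pred = saliency >= threshold
    gt = ground_truth.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

# Hypothetical saliency over 8 time steps and its ground-truth mask
saliency = np.array([0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.1, 0.0])
gt       = np.array([1,   1,   0,   0,   1,   1,   0,   0])

iou = iou_score(saliency, gt)                  # overlap of binarized masks
auprc = average_precision_score(gt, saliency)  # threshold-free analogue
```

Both numbers reward the same property—salient mass concentrated on ground-truth time steps—so methods that rank well on AUPRC tend to rank well on IoU, as observed in the table above.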
The other setup we use to evaluate explanation quality is an occlusion analysis. In this case, we mask-out (or occlude) values predicted by the explainer to be unimportant, and we measure the drop in classification performance for the predictor under these explanation-dependent perturbations. This allows us to evaluate on real-world datasets where no ground-truth explanation is available. We describe these experiments in Section 6, R2, and the results are shown in Figure 3 in the main text.
We hope that this clears up some confusion about our evaluations. All of our experiments seek to evaluate the quality of explanations from each method. We have extensive evaluations in this regard, using 8 datasets with a combination of synthetic and real-world time series datasets. In addition, our Appendix includes additional analyses on a variety of time series classification architectures. We have included a brief experiment using IoU metrics to illustrate the point that our experiments capture similar qualities to that measured by the IoU score.
### Q2: Table 3
Thank you for this suggestion concerning the readability of our manuscript. In response, we have moved Table 3 to the top of the page.
**References**:
[1]: Agarwal et al., “Evaluating explainability for graph neural networks”, *Scientific Data*, 2023.
[2]: Crabbe and van der Schaar, “Explaining Time Series Predictions with Dynamic Masks,” ICML 2021.
[3]: Tonekaboni et al., “What went wrong and when? Instance-wise Feature Importance for Time-series Models”, NeurIPS 2019.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I appreciate detailed reply for my review. I do not have any question at this point. | Summary: This paper presents an in-hoc interpretability mechanism to explain time series prediction. In particular, the authors train an interpretable surrogate model by learning H^E and G^E in the embedding space.
The objective function optimizes model behavior consistency by considering the distance in the training embedding, Z, and explained embedding, Z^E.
Strengths: 1. Discrete masking as opposed to continuous masking
2. Landmark to partition latent space visualization and shape of the signal
3. Can be generalized to other classification tasks and neural model architectures
Weaknesses: As demonstrated by the ablation studies, the label alignment (LA) loss is always better than MBC in isolation.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Table 12 results in the Supplementary materials require an explanation for the SeqComb-MV dataset. Specifically, the MBC and LA losses in isolation have very poor AUP, ~0.0576. But the combined MBC + LA results in an AUP of 0.8326. The performance boost is really large.
Do the authors have any explanation for this behavior?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes, the authors explained use cases where their model is not the best.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for your constructive feedback about our work. We appreciate that you recognized the core contributions of our work, as well as the novelty of multiple components of TimeX, such as discrete masking, landmark explanations, and model behavior consistency learning. We respond to your concerns about our work in the following comment. Please reach out if you have any additional questions or clarifications about our work.
### W1 and Q1: LA vs. MBC Loss in Isolation
We thank you for noting this peculiar result in our work. The explanation for this behavior gets at a core of the motivation behind TimeX: faithful explainers should match multiple internal states of the model.
First, we recall an argument presented in Section 4.3, where we justify MBC. We remark that perturbation-based methods have a similar idea to TimeX: find some sparse perturbation to the input that can preserve the output state of the model. This is often done by observing the predicted label from the model under an applied mask or perturbation, e.g., in one of our baselines, Dynamask [1]. A perturbation that preserves the output state is said to be “faithful” to the model because it is assumed that the perturbation retains the parts of the input that drive the prediction. In a sense, MBC generalizes this idea to latent spaces, ensuring that invariances are preserved on the latent space of the model as well as the prediction space.
Beyond the introduction of MBC alone, another core contribution of our work focuses on optimizing faithfulness to predictor models on multiple levels. We use multiple hidden or output states of the model, e.g., a latent space and a logit space, on which the explainable surrogate should match the reference predictor. The hypothesis behind this innovation is that model behavior (the exact objective we are trying to explain) cannot be fully captured by one state, e.g., a predicted label, but rather by multiple states throughout the model. A similar assumption is made in knowledge distillation, where methods often optimize student models to match the teacher on multiple layers of the network. **Therefore, MBC and LA together enforce adherence to model behavior on two fronts: the latent space and prediction space, respectively.** This explains the behavior that you mention in our ablation studies: MBC and LA perform poorly alone, but together, these two losses provide a powerful objective to train the model.
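To make the two-front objective concrete, here is a hedged PyTorch sketch (an illustrative form of the two losses, not the paper's exact definitions): the MBC term matches pairwise latent distances between the frozen predictor and the surrogate, and the LA term matches their output distributions.

```python
import torch
import torch.nn.functional as F

def mbc_loss(z, z_e):
    """Illustrative model-behavior-consistency term: match the
    pairwise-distance structure of the predictor's latents `z`
    with that of the surrogate's latents `z_e`."""
    d = torch.cdist(z, z)        # (N, N) distances in predictor space
    d_e = torch.cdist(z_e, z_e)  # (N, N) distances in surrogate space
    return F.mse_loss(d_e, d)

def la_loss(logits, logits_e):
    """Label-alignment term: the surrogate's logits should reproduce
    the predictor's output distribution (KL divergence)."""
    return F.kl_div(F.log_softmax(logits_e, dim=-1),
                    F.softmax(logits, dim=-1),
                    reduction="batchmean")

def total_loss(z, z_e, logits, logits_e, lam=1.0):
    # The two terms enforce faithfulness on the latent space and the
    # prediction space respectively; neither suffices alone.
    return mbc_loss(z, z_e) + lam * la_loss(logits, logits_e)
```

Under this formulation, a surrogate that only matches labels (LA alone) or only matches latent geometry (MBC alone) can drift on the other front, which is consistent with the ablation behavior discussed above.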
**References**:
[1] Crabbe and van der Schaar, “Explaining Time Series Predictions with Dynamic Masks,” ICML 2021.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed explanation. I do not have further queries at this point. | Summary: TimeX proposes a explanation module that is trained along-side the model to provide more consistent explainiations. This is done by making the internal embeddings of explaination modules consistent with that of full model such as distance consistency and label consistency.
Strengths: 1. The methodology is well motivated and presented.
2. The consistency losses encourage the explanation module features to learn from the latent space of the model.
3. The synthetic experiments are well set up, and the model provides significant performance improvements in synthetic and real-world datasets.
Weaknesses: 1. The novelty of the method seems unclear. Model consistency has been previously explored in neural explanations, and this work dovetails it to time-series models.
2. What is the additional computational complexity or resources required to train the explainer modules? How does it compare to other methods?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper presents a model-behavior-consistency-based method to generate explanations for time-series models. While the results are very significant, the computational requirements relative to other post-hoc methods and the performance vs. compute tradeoff are not clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and criticism about our work. We greatly appreciate your claims that our work is “well motivated and presented” and that our explainer provides “significant performance improvements” over baseline explainers. We address concerns about novelty and computational efficiency in this response. If you feel our comments address your concerns, we kindly ask that you raise your score. Please reach out during the author-reviewer discussion if any questions are left unanswered.
### W1: Novelty
Thank you for this comment. Our method is novel in its formulation; specifically, one of our central contributions, model behavior consistency (MBC), is a novel formulation for training surrogate explainers for neural networks in time series. MBC states that the surrogate model should match the original time series model by mimicking its latent space and predictions. This is a novel extension of previous perturbation-based methods, which seek to find some perturbation on the original input that preserves the prediction of the model. Instead of only preserving the predictions of the original model, TimeX also seeks to preserve the structure of the original model’s latent space, which increases faithfulness to the original model's behavior. See Section 4.3 for a more formal justification of MBC. While consistency has been explored in previous XAI works, we are the first to consider consistency between the predictor model and the explainable surrogate model. To draw this distinction, we contrast our method to two notions of consistency: 1) **consistency between explanations** and 2) **consistency as an explainability metric**.
**Consistency between explanations**: This notion has been introduced in previous works in the explainability literature. Pillai et al. [1] train a saliency explainer via contrastive learning that preserves consistency across the saliency maps of augmented versions of images. A few other works have explored maintaining consistency of explanations across various perturbations and augmentations, specifically in computer vision [2,3]. In one of the only previous works to consider explanation consistency in time series, Watson et al. [4] train an explainer on an ensemble of classifiers to optimize the consistency of explanations generated by the explainer applied to each individual classifier. TimeX does not seek to optimize consistency between explanations but rather consistency with the predictor model that it explains.
**Consistency as an explainability metric**: Dasgupta et al. [5] define explanation consistency as similar explanations corresponding to similar predictions; this metric is then used as a proxy for faithfulness to evaluate the quality of explainers. However, Dasgupta et al. use the notion of consistency to evaluate explainers, not to train and design a new explainer method. TimeX uses consistency as a learning objective rather than simply a metric.
Our work differs from these previous formulations of explanation consistency. We seek to optimize the consistency not between explanations directly, as mentioned in previous works, but rather between the explainer and the model it is tasked with explaining. MBC attempts to ensure that the behavior of the explainable surrogate matches that of the original model. The definition of consistency in Dasgupta et al. is the closest to our definition of MBC; however, Dasgupta et al. does not seek to optimize the consistency of explainers but rather to evaluate the output of post-hoc explainers. TimeX directly optimizes the consistency between the surrogate model and the original predictor through the MBC loss, a novel formulation that seeks to increase the faithfulness of explanations generated by TimeX. Thus, we respectfully disagree with the sentence in your summary, which states that TimeX is trained to “provide more consistent explainiations [*sic*].” We have added these citations and arguments to the Appendix to make our distinction from previous work clearer.
### W2: Computational resources
Please reference the general response Point 3 for discussion on this point. We sincerely thank you for requesting this analysis as this enriches the evaluation for TimeX.
**References**:
[1] Pillai et al., “Consistent Explanations by Contrastive Learning”, CVPR 2021.
[2] Han et al., “Explanation Consistency Training: Facilitating Consistency-Based Semi-Supervised Learning with Interpretability”, AAAI 2021.
[3] Guo et al., “Visual attention consistency under image transforms for multi-label image classification”, CVPR 2019.
[4] Watson et al., “Using model explanations to guide deep learning models towards consistent explanations for EHR data”, Scientific Reports 2022.
[5] Dasgupta et al., “Framework for Evaluating Faithfulness of Local Explanations”, ICML 2022.
---
Rebuttal Comment 1.1:
Title: Following up on our rebuttal
Comment: Dear Reviewer Mf92,
Thank you again for your valuable feedback on our manuscript. We wanted to inquire if you had any additional comments or concerns regarding our response. We have carefully considered your feedback and have made the requested clarifications to improve the quality of the paper.
Since it comes to the end of the author-reviewer discussion period, we would be grateful if you could let us know if the revisions have addressed your concerns or if there are any remaining issues that need to be addressed.
Thank you for your attention to this matter. We look forward to hearing from you soon.
Best regards,
Authors | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for thoughtful and insightful feedback! We are pleased that reviewers are excited about the novel contributions of our work. Reviewers remark that TimeX **“provide[s] interpretable explanations that are both faithful to the model’s predictions and informative about the underlying temporal patterns”** [Uw18]. Reviewers are also impressed with our strong evaluation results, noting that **“the evaluation is very convincing, using diverse datasets and many compared methods”** [9pJd] and that TimeX **“provides significant performance improvements in synthetic and real-world dataset[s]”** [Mf92]. Finally, reviewers find our work **“particularly relevant as high capacity models become more … accessible”** [wJLT], noting how TimeX **“shows to be effective in explaining a variety of time series models”** [Uw18] and **“can be generalized to other classification tasks and neural model architectures”** [7JHq]. We thank the reviewers for the strong praise of our work and contributions.
We now highlight a few important points raised by reviewers that warrant inclusion in the general response:
### Highlights of New Experiments
In response to reviewers' comments, we have run 6 additional experiments and generated 4 additional plots. We have included these experimental details and results in individual rebuttals, with an experiment on computational efficiency detailed here in the overall response. We have also attached a single PDF page containing the four figures and two tables we generated in response to reviewer comments. We now briefly describe these experiments and the figures/tables referenced in each:
* **Runtime experiment** [Uw18,Mf92]: We show that TimeX is much faster at inference time than time series explainer baselines. Results are presented in **Response Table 1**. Please see Point 3 in the general response below.
* **Irregular time series** [Uw18]: We show that TimeX can achieve high explanation performance on an irregular time series synthetic dataset. The table is shown in **Response Table 2**.
* **Forecasting** [Uw18]: We demonstrate that TimeX can be used to explain forecasting tasks. A visualization is shown in **Response Figure 1**.
* **Visualizing explanations across model architectures** [Uw18]: We show a visualization of TimeX explanations across the transformer, CNN, and LSTM architecture on the FreqShapes dataset. This visualization is shown in **Response Figure 2**.
* **Varying $r$ parameter** [Uw18]: We find that TimeX is robust to choices of the $r$ parameter in the explanation loss. A visualization of these results is shown in **Response Figure 3**.
* **Applying IoU metric** [9pJd]: We add a new metric, Intersection-over-Union (IoU), that corroborates TimeX’s high explanation performance. The results are presented in the rebuttal to Reviewer 9pJd.
* **Landmark analysis** [wJLT]: We show that the landmarks learned by TimeX are high-quality and capture regions of the latent space with high similarity. Results are visualized in **Response Figure 4**.
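For readers unfamiliar with the IoU metric mentioned above, a minimal sketch for binary saliency masks over time steps (illustrative only; the exact computation and any thresholding used in our experiments may differ):

```python
def iou(pred_mask, true_mask):
    # Intersection-over-Union between a predicted binary explanation mask
    # and a ground-truth mask, both indexed by time step.
    inter = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    union = sum(1 for p, t in zip(pred_mask, true_mask) if p or t)
    return 1.0 if union == 0 else inter / union  # empty masks agree perfectly
```

For example, an explanation that recovers one of three salient time steps while adding one spurious step scores 1/3.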
### Note about in-hoc vs. surrogate explainers
We mistakenly referred to TimeX as an “in-hoc predictor” in-text. We appreciate Reviewer Uw18 pointing out this error. While the result of the model is an end-to-end differentiable time series explainer, TimeX is fundamentally a surrogate explainer method. We have appropriately removed the claims of TimeX being an in-hoc method and have clarified that it is a surrogate method in-text.
### Computational runtime analysis
We thank Reviewer Uw18 and Reviewer Mf92 for requesting more analysis of the computational resources required for TimeX relative to baseline explainers. **Response Table 1** shows the training and inference time in seconds of TimeX versus two state-of-the-art time series-specific baselines, Dynamask and WinIT. We choose two real-world time series datasets, PAM and Epilepsy, which are of varying sizes. PAM contains 4266 training samples and 534 testing samples, each 600 time steps in length. Epilepsy contains 8280 training samples and 2300 testing samples, each 178 time steps in length. The table lists the time needed to train each explainer and to perform inference on the testing set.
TimeX is by far the most efficient model at inference time for both datasets. This result is expected, as Dynamask and WinIT both require iterative procedures for each sample at inference time, while TimeX requires only a forward pass of $H^E$ at inference. Combining training and inference time, TimeX is the second-fastest on both datasets. However, WinIT and Dynamask times vary greatly between the datasets: Dynamask is the fastest on PAM, while WinIT is the fastest on Epilepsy. WinIT scales poorly to samples with many time steps, while Dynamask scales poorly to large testing sets. TimeX strikes a compromise between these extremes, scaling better than Dynamask to large numbers of testing samples and better than WinIT to long time series.
### General Notes to Reviewers
Every figure and table reference starting with “R#” refers to figures and tables in the PDF attached to the general response; to make this clear, we bold figure names such as **Figure R1**. We abbreviate weaknesses by “W#” and questions by “Q#”. Although we are barred from submitting a revised manuscript at this stage, we assure the reviewers that the described edits have been incorporated into our working revision. We also use several common acronyms throughout the responses:
* XAI: Explainable artificial intelligence
* MBC: Model behavior consistency (Section 4.3)
* LA: Label alignment
* AUPRC, AUP, AUR: Metrics mentioned in Appendix C.4
We thank all reviewers again for their thoughtful commentary. We worked hard to improve our paper, and we sincerely hope the reviewers find our responses informative and helpful. If you feel the responses have not addressed your concerns to motivate increasing your score, we would love to hear what points of concern remain and how we can improve our work. Thank you again!
Pdf: /pdf/29037e4aa3fc4d32a9de858b8d540d040f76a231.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper introduces a novel time series interpretability model called TIMEX. The challenge of interpreting time series models arises from the need to identify both the specific time series signals influencing predictions and their alignment with interpretable temporal patterns. TIMEX addresses the issue of model faithfulness by introducing model behavior consistency, ensuring relations in the latent space induced by the pretrained model are preserved when compared to relations in the TIMEX-induced latent space.
TIMEX provides discrete attribution maps, giving interpretable explanations for the model's predictions. Unlike existing methods, TIMEX goes further by learning a latent space of explanations, enabling visual aggregation of similar explanations and recognition of temporal patterns.
The evaluation of TIMEX on 4 synthetic and 4 real-world datasets, which includes case studies involving physiological time series, demonstrates its superior performance compared to state-of-the-art interpretability methods. TIMEX's innovative components hold promise for training interpretable models that capture the behavior of pretrained time series models.
Strengths: Overall the proposed method is novel, effective, and well-evaluated. The main strengths of the paper are as follows:
1. The paper proposes a novel method for explaining time series models that is based on self-supervised learning. This approach has several advantages over traditional explanation methods, such as being able to learn explanations for models with complex temporal dynamics. The paper provides a clear and concise overview of the problem of explaining time series models and provides a detailed description of the TimeX method, including the rationale behind the design choices.
2. The authors present a comprehensive evaluation of TimeX on a variety of experiments with time series models and datasets. The results show that TimeX is able to provide interpretable explanations that are both faithful to the model's predictions and informative about the underlying temporal patterns.
3. The paper also includes a number of case studies that demonstrate the use of TimeX to explain the predictions of time series models in real-world applications. These experiments highlight TimeX's capability to provide insights into the behavior of time series models and to identify potential problems with the models.
4. The authors evaluate TimeX through an extensive set of ablations and on a variety of time series models. TimeX shows to be effective in explaining a variety of time series models, including LSTM, CNN, and vanilla-Transformer model. This suggests that TimeX is a general-purpose method for explaining time series models.
Weaknesses: There are a few weaknesses that, if addressed or identified as limitations for future work, could improve the paper:
1. As currently set up, the model does not handle variable-length time series, which are a common occurrence in medical time-series data. Extending TimeX to handle them could prove challenging, as aligning different time series of variable length would also become important.
2. Another common challenge of time-series data that is not addressed in the paper is handling irregular time-step intervals. A discussion of this important limitation is missing from the paper. Irregularly sampled time series are predominant in the healthcare data this paper focuses on, and this is an important weakness of TimeX that goes unmentioned.
3. The method has not been evaluated on a wide range of different tasks. The authors of the paper evaluated TimeX on a variety of time series models and datasets, but they did not evaluate TimeX on a wide range of tasks. This means that it is not clear how well TimeX would perform on other tasks, such as anomaly detection or forecasting.
4. TimeX currently does not handle temporal prediction (i.e. a prediction task where prediction is done at every time-step), and it is unclear how it can be expanded to provide explanations for models designed for such tasks.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. In the appendix, some of the explanations for SeqComb-UV are visualized. The repeating patterns can all be seen to occur around the same time points. Is TimeX capable of identifying patterns that are not necessarily aligned in absolute time between two patients?
2. The paper would be improved by additional experiments on different choices of $r$; in its current state, a reader unfamiliar with [46] will have a hard time deciphering the effect of different values of $r$.
3. Why is cosine similarity used in Equation 3? For SimCLR this would have made sense as the embedding of SimCLR is on the unit hyper-sphere and thus cosine similarity is an appropriate choice. But since the embeddings here are not constrained to be on the hyper-sphere I am uncertain why encoding through the notion of directionality would be helpful here.
4. Wouldn’t it be more appropriate to normalize $L_{MBC}$?
5. How does the differentiable attention masking presented in the appendix come into play for the model? Also this should be referring to section 4.2 instead of 4.1 as currently printed in the appendix.
6. Following the methods section, I am still uncertain as to why TIMEX is considered to be an in-hoc model and not a surrogate method. The paper would become easier to follow, if the authors can draw a parallel here and clear this uncertainty in the introduction.
7. The appendix currently provides an extensive list of experiments. One additional point to improve the already thorough analysis would be to visualize and see how much the explanations found differ for different underlying models tested (transformer, LSTM, CNN).
8. In the appendix the running time of the model is presented, but how does it compare to other baselines in your experiments?
Minor comments:
1. In section 3.1, $F^E$ should be defined, as otherwise $F(z_i^E)$ would imply that $z_i^E$ and $z_i$ belong to the same space.
2. Line 213 has a typo: we [use] a direct-value masking procedure. ("use" is missing).
3. Specifics of how the transformer encoder-decoder was used should be added to appendix for reproducibility of results.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper does not contain any discussion on the potential societal impact of their work. As interpretability opens a broad avenue for discussion about this topic, I believe this section is essential to be added to the paper.
Additionally, the current limitations section is quite coarse. The authors could improve the paper by expanding on the limitations of TimeX in its current state and on interesting directions for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your extensive comments, critiques, and praises for our work. We hope our response and experiments further convince you of TimeX’s novelty and effectiveness, and we kindly ask you to consider raising your score. Please reach out if there are additional questions.
### W1 and W2: Irregular time series
We introduce an irregularly sampled version of SeqComb-MV, randomly dropping an average of 15% of time steps. This results in variable-length time series, addressing both W1 and W2. We then follow Zhang et al. [1] using a time series transformer with an irregular attention mask. To avoid direct interference with this mechanism, we train TimeX only with direct-value masking.
**Table R2** shows the results of this experiment. We compare TimeX to Integrated Gradients (IG) because, given the nuance of learning from irregularly sampled datasets [1], most baselines do not apply to irregularly sampled time series without significant changes that are out of scope for our work. TimeX demonstrates superior performance in this setting, outperforming IG by over 1.5x in AUPRC.
### W3 and W4: TimeX on a wide range of tasks
We agree that TimeX can in principle be applied to a wide range of tasks. We focus on classification for close comparison to evaluations in prior works. Still, to address your concerns, we demonstrate TimeX’s generalizability to diverse tasks by explaining a forecasting model. We use the ETTh1 dataset [2] and a vanilla forecasting transformer. To modify TimeX for forecasting, we first extract the embeddings used for MBC by max pooling over the hidden states of the decoder. Next, we use a revised LA loss that uses MSE as the distance function between predictions rather than JS divergence.
We show visualizations of two samples in **Figure R1**. Explanations are in the left column while forecasted time steps are in the right column. A few patterns emerge:
1. TimeX identifies late time steps as important for the forecast, an expected result for a dataset with small temporal variance.
2. **(a)**: the forecast is an increasing sequence, and TimeX identifies a region of an increasing sequence (time ~450-485). This suggests the model is using this increasing sequence as a reference for forecasting.
3. **(b)**: a sharp upward spike is forecasted at the beginning of the predicted window. Correspondingly, TimeX identifies a local minimum around time 260 and a sharp upward spike around time 475-490.
TimeX extracts meaningful explanations from forecasting datasets. Regarding anomaly detection and temporal prediction, TimeX could be transferred by similarly modifying the training procedure. Given the nuances of these individual tasks, we leave this for future work.
### Q1: SeqComb-UV visualization
The randomly-chosen samples for SeqComb-UV visualization have ground-truth explanations that fall in similar time points, but this is not true in general across our synthetic datasets. Each synthetic dataset contains different temporal dynamics that vary across time, and because of TimeX’s consistently-high performance, we are confident in its capability to identify important patterns in a wide variety of time series settings.
### Q2: Varying r parameter
We conduct an experiment where we vary the $r$ parameter and measure explanation quality. We use the SeqComb-UV dataset and hold all other hyperparameters constant while varying $r$. The result is visualized in **Figure R3**. Low $r$ values lead to a drop in explainer performance with respect to AUPRC and AUP. Importantly, for $r$ values above 0.4, explainer performance is stable, suggesting that TimeX is robust to the choice of $r$.
### Q3: Cosine similarity
In principle, Equation 3 can accept any distance function, but we choose cosine similarity because it is used in popular approaches such as SimCLR and InfoNCE [3,4].
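As a self-contained illustration of the distance function discussed here (not the paper's implementation; the `eps` guard against zero-norm inputs is our addition):

```python
import math

def cosine_similarity(a, b, eps=1e-8):
    # Cosine similarity between two embedding vectors: the dot product
    # normalized by both vector norms, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b + eps)
```

Since the embeddings are not projected onto the unit hypersphere, this compares only the direction of two embeddings while ignoring their magnitudes, which is the point the reviewer raises.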
### Q4: Normalizing $L_{MBC}$
We do indeed normalize $L_{MBC}$ during training. We have made an appropriate amendment to Section 4.3 to fix this error.
### Q5: Differentiable attention masking
We use the differentiable masking approach presented in Nguyen et al. [5] to ensure that we can learn the masks in TimeX in an end-to-end differentiable manner.
### Q6: In-hoc vs. surrogate method
Thank you for pointing out this error. We have adopted the term “surrogate” to refer to our method. Please reference Point 2 in the general response.
### Q7: Explanations from different models
We have generated **Figure R2** to show the explanations across different models on the FreqShapes dataset. Explanations are similar across models. TimeX and IG outputs appear similar, but TimeX has higher recall for important patterns, which is reflected in quantitative results.
### Q8: Runtime
Please see general response Point 3.
### Minor Comments
Thank you for pointing out several typos and errors in our submission. We have made these edits within the text.
### Limitations:
In response to your comment, we will provide a more detailed limitations section and a societal impacts section. Our societal impacts section will read: “Time series data spans domains like finance, healthcare, energy, and transportation. Enhanced interpretability can bolster decision-making and trust. While it aids in identifying systemic biases for fairer, inclusive systems, caution is paramount. Misinterpretations or over-reliance on automated insights can backfire, highlighting the need for human and algorithmic collaboration.”
**References**: [1] Zhang et al., “Graph-guided network for irregularly sampled multivariate time series”, ICLR 2022. [2] Zhou et al., “Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting”, AAAI 2021.
[3] Chen et al., “A Simple Framework for Contrastive Learning of Visual Representations”, ICML 2020.
[4] van den Oord et al., “Representation Learning with Contrastive Predictive Coding”, arXiv 2018.
[5] Nguyen et al., “Differentiable window for dynamic local attention”, ACL 2020.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I want to express my sincere appreciation for your comprehensive rebuttal to my review of your paper. Your responses clarified several aspects of the paper and addressed the concerns I had raised effectively.
After carefully considering your responses and the additional information you provided to all the reviewers, I am convinced of the value and significance of your research. I believe that your paper makes a valuable contribution to the field. I commend your dedication to improving the paper and your willingness to engage in constructive dialogue. Your efforts have further strengthened my confidence in the quality and relevance of your work. For this reason I have increased my score to recommend accepting the paper for publication subject to including the additional experiments and explanations in the final version of the paper.
I have also increased my score for presentation (from 3 to 4) and confidence (from 3 to 4).
Lastly, I would highly suggest including the synthetic data experiments and a sample jupyter notebook of end-to-end training and extracting the landmarks in your final published code for reproducibility. | null | null | null | null | null | null |
Convergence of Adam Under Relaxed Assumptions | Accept (spotlight) | Summary: The paper removes the Lipschitz gradient assumption for adaptive SGD (ASGD), making ASGD applicable to a wider range of problems. Under the weaker assumption, the authors still prove the optimal convergence rate. Moreover, they propose a variance-reduced version with an accelerated complexity. The results are interesting and novel.
Strengths: 1. They further relax the $(L_0,L_1)$ Lipschitz gradient assumption, which is more realistic.
2. They develop a new analysis showing that ASGD converges under the weaker Lipschitz gradient assumption, still with the same rate as existing works on ASGD. The results and techniques are novel.
3. A variance-reduced version of ASGD is proposed with provable acceleration.
4. The authors show that the rates of ASGD and its variance-reduced version are dimension-free in certain cases.
Weaknesses: 1. The proof of the variance-reduced version is interesting, but there are no numerical demonstrations. Since the authors propose an accelerated algorithm, it would be convincing to present some numerics, because the theory alone is then no longer the focus.
2. In the contribution part, the authors claim that they do not assume bounded gradients. This is because they assume a.s. bounded noise, which has been studied by previous work [Li and Orabona, 2019].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: There are very recent works on using bounded variance noises [Matthew Faw et al. COLT 2022, Bohang Wang et al. COLT 2023]. Can the results in this paper be extended to that case?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: YES
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback! Below we will try to address the concerns and questions.
- First, we thank the reviewer for the suggestion about numerical demonstrations for VRAdam. We will consider empirically comparing the performances of Adam and the variance-reduced version of it in the revision.
- The assumption of bounded gradient is actually independent of that of bounded noise. Even in the deterministic setting with zero noise, previous analyses of Adam cited in our paper still require strong assumptions like bounded gradients. For the paper [Li and Orabona, 2019], they are studying AdaGrad which is different from Adam, and therefore not directly comparable to our results.
- The recent works [Matthew Faw et al. COLT 2022, Bohang Wang et al. COLT 2023] also study AdaGrad (or AdaGrad-Norm) instead of Adam. Based on our understanding, they actually rely on some properties of AdaGrad to relax the noise assumption to bounded variance. Our approach is general and can also be applied to get a convergence result for AdaGrad. However, it may not be as good as their analyses designed for AdaGrad (e.g., we may still require the bounded noise assumption for AdaGrad).
Note that the reviewer did not provide enough details for us to precisely identify the references in the comments. We assume they refer to the following three papers. Please let us know if not. We will also cite them in the revision.
[1] Li and Orabona. "On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes." AISTATS 2019
[2] Faw et al. COLT 2022. "The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance." COLT 2022
[3] Wang et al. "Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions." COLT 2023 | Summary: The paper proposes a new proof strategy for the convergence of Adam in the
non-convex setting. The new analysis relaxes the typical assumptions
in the following ways: 1) it assumes relaxed smoothness, where the norm of the Hessian
grows sub-quadratically with the gradient norm; 2) it does not require
bounded gradients. With a deterministic gradient oracle, the obtained rate is $O(1/\epsilon^2)$.
For the stochastic setting, the author shows a convergence rate of $O(1/\epsilon^4)$
with high probability.
Furthermore, the author proposes a variance-reduced version of Adam and proves
a rate of $O(1/\epsilon^3)$. The above rates in the stochastic setting are dimension-dependent
and can be dimension-free if the Hessian norm is sub-affine on the gradient norm.
The main limitation is that although the bounded gradient assumption is removed,
the bounded noise is still needed.
Strengths: 1. The paper is well-written and easy to follow. Specifically, the technical
challenges and the intuition behind the new analysis are discussed comprehensively.
2. Removing the bounded gradient assumption is generally challenging. Although
there are a few works for AdaGrad that address this, existing work targeting
the removal of this assumption for Adam only converges to a neighborhood of
stationary points.
3. The paper considers the smoothness condition where the norm of the
Hessian is bounded by a sub-quadratic function of the gradient norm, which
is more general than $(L_0, L_1)$-smoothness.
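Based on the reviews' descriptions, the condition presumably takes a form along these lines (our paraphrase for the reader, not a quotation of the paper's definition):

```latex
% (\rho, L_0, L_1)-smoothness as described in the reviews: the Hessian
% norm is bounded by a sub-quadratic function of the gradient norm,
\[
  \big\|\nabla^2 f(x)\big\| \;\le\; L_0 + L_1 \big\|\nabla f(x)\big\|^{\rho},
  \qquad 0 \le \rho < 2 .
\]
% Taking \rho = 1 recovers (L_0, L_1)-smoothness [Zhang et al., 2019],
% and additionally taking L_1 = 0 recovers classical L-smoothness.
```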
Weaknesses: 1. Although the work relaxes the bounded stochastic gradient assumption
commonly required in the analysis of Adam, assuming almost surely bounded
noise is still somewhat strong and impractical. Since the proof technique,
the contradiction argument, heavily relies on this assumption, it is
unclear whether it can be used in more realistic settings.
2. It is difficult to assess the usefulness of the proposed variance reduction method
since the convergence result does not improve upon previous methods and there
is no experimental evidence.
Minor:
The result presented for the deterministic setting is not formal. It would be
better to have a formal theorem, at least in the appendix.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. In Theorem 4.1, the same value is selected for $\beta_{sq}$ and $\beta$, which differs from the conventional choice of $(1 - \beta)^2 < 1 - \beta_{sq}$. Could the author please explain the reason behind this?
2. Why do we have a dependence on $d$ in the convergence result when $\rho \geq 1$, which suddenly becomes dimension-free when $\rho < 1$? Also, some results for Adam assume the infinity norm of the gradient is upper bounded by a constant [1], which naturally leads to dimension dependence. It seems that by assuming a bounded 2-norm, the results become dimension-free [2, 3]. Could the author elaborate further on this matter?
3. It seems that if we have access to problem-dependent parameters and the initial conditions (e.g., the gradient norm at the first iteration), SGD can also converge with the optimal rate [4]. What is the benefit of adaptive methods (if they also require tuning) compared to SGD in the more generalized smoothness setting?
References:
1. Défossez, A., Bottou, L., Bach, F., & Usunier, N. (2020). A simple convergence proof of adam and adagrad. arXiv preprint arXiv:2003.02395.
2. Chen, X., Liu, S., Sun, R., & Hong, M. (2018). On the convergence of a class of adam-type algorithms for non-convex optimization. arXiv preprint arXiv:1808.02941.
3. Guo, Z., Xu, Y., Yin, W., Jin, R., & Yang, T. (2021). A novel convergence analysis for algorithms of the adam family and beyond. arXiv preprint arXiv:2104.14840.
4. Li, H., Qian, J., Tian, Y., Rakhlin, A., & Jadbabaie, A. (2023). Convex and Non-Convex Optimization under Generalized Smoothness. arXiv preprint arXiv:2306.01264.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful feedback! Below we will try to address the concerns and questions in the comments.
**For the weaknesses:**
- Regarding the bounded noise assumption, we want to first clarify that, it is not hard to generalize it to sub-Gaussian noise, as we discussed around Line 165. The contradiction argument is not really the bottleneck of further generalizing it to more heavy-tailed noise like noise with bounded variance. The challenging part is that, when noise is heavy-tailed, there is an unignorable probability that $v_{t}$ becomes very large due to a very large noise in the gradient, which may result in a small learning rate and slow convergence. If this challenge can be overcome, we can replace the contradiction argument with something similar to the proof of convergence of VRAdam to make it work.
- We also thank the reviewer for the suggestions on adding experimental results for VRAdam and formal statement in the deterministic setting and will consider adding them in the revision.
**For the questions:**
- 1. We choose $\beta=\beta_{sq}$ because it guarantees $\hat{v_t}\succeq\hat{m_t^2}$ (where "$\succeq$" denotes coordinate-wise inequality, see Proof of Lemma 5.2 for why it holds), which gives a uniform bound on the update $\\|x_{t+1}-x_t\\|$ and makes our analysis easier. Also, based on our preliminary experiments, Adam is not very sensitive to the momentum parameters (see Figure 1 in the PDF file attached to our global rebuttal to all reviewers).
- 2. Roughly speaking, the difference is due to how we upper bound the stepsize $\frac{\eta}{\sqrt{\hat{v_t}}+\lambda}$. For $\rho<1$, the problem is relatively more smooth, and we can simply use $\frac{\eta}{\sqrt{\hat{v_t}}+\lambda}\le \frac{\eta}{\lambda}$, which to some extent makes the analysis a bit similar to SGD whose rate is dimension-free. When $\rho\ge1$, we need to also consider the $\sqrt{\hat{v_t}}$ term in the denominator and use it to bound ${\\|x_{t+1}-x_t\\|}\_{\infty}\le \eta$ as we discussed in the response to your previous question, which implies $\\|x_{t+1}-x_t\\|\_{2} \le \eta \sqrt{d}$, and gives a dimension-dependent rate. We think what we discussed here also at least partially explains why the rates in related works mentioned by the reviewer are dimension-free or dimension-dependent. Finally, we want to mention that we have some ideas on obtaining a dimension-free rate for $\rho\ge1$ with a more careful analysis and will try to make it formal in the revision.
- 3. Since SGD already obtains an optimal rate, our results can not theoretically explain the advantage of Adam over SGD observed in practice. We will leave it as an interesting and important future work to find a reasonable condition where Adam is provably better than SGD.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my questions. I would like to increase my score.
I would appreciate it if the author could elaborate more on the bounded noise assumption. Regarding the response "The challenging part is that, when noise is heavy-tailed, there is an unignorable probability that $v_t$ becomes very large due to a very large noise in the gradient, which may result in a small learning rate and slow convergence.", recent works for AdaGrad have successfully removed the need for bounded gradients and noise [1, 2]. Specifically, [2] also considers $(L_0, L_1)$-smoothness. Similar to Adam, AdaGrad may also have a large $v_t$ if there is a very large noise in the gradient. I understand the analysis for Adam would be different, but maybe some techniques from these recent works could help remove the bounded noise assumption?
**References**
[1] Faw, Matthew, et al. "The power of adaptivity in sgd: Self-tuning step sizes with unbounded gradients and affine variance." COLT. 2022.
[2] Wang, Bohan, et al. "Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions." COLT. 2023.
---
Reply to Comment 1.1.1:
Title: Thanks for the reply
Comment: Thanks for the reply and further comments.
We do agree that it is possible that some of the techniques from the two recent works mentioned by the reviewer could help to relax the bounded noise assumption. However, we do not think it is straightforward. One important difference between AdaGrad and Adam is that $v_t$ is non-decreasing for the former but not for the latter. In fact, [2] does rely on such property in their analysis. For example, below is a paragraph taken from their paper (page 3)
*"In this paper, we propose a novel auxiliary function $\xi(t)=\frac{\nabla f(w_t)}{\sqrt{v_t}}$ for the convergence analysis of AdaGrad(-Norm), and show the error term can be bounded by $\mathbb{E}^{|\mathcal{F}_t}[\xi(t-1)-\xi(t)]$ (c.f. Lemma 4), which can be reduced by telescoping. As explained in Section 3, such an auxiliary function is rooted in the **non-increasing nature** of the adaptive learning rate $\frac{\eta}{\sqrt{v_t}}$.''*
Therefore, we leave it as an interesting and potentially challenging future work. | Summary: This paper studies the convergence of Adam over non-convex objectives. To begin with, this paper proposes a new non-uniform smoothness condition called $(\rho, L_0,L_1)$ smoothness condition, which generalizes $(L_0,L_1)$ smoothness condition proposed in [Zhang et al. 2019]. The authors then prove the high-probability convergence rate $\mathcal{O}(1/\sqrt{T})$ of Adam under $(\rho, L_0,L_1)$ smoothness condition and affine noise variance assumption. The authors then propose Variance Reduced Adam (VRAdam) by combining the gradient estimation of STORM into Adam, and derive the convergence rate $\mathcal{O}(1/\sqrt[3]{T^2})$ of VRAdam.
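For context, the STORM gradient estimator mentioned in this summary (Cutkosky and Orabona, 2019) can be sketched as follows; this is the generic STORM recursion, not necessarily VRAdam's exact implementation:

```python
def storm_estimator(d_prev, grad_curr, grad_prev, a):
    # STORM recursive momentum:
    #   d_t = g(x_t; xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}; xi_t)),
    # where both stochastic gradients are evaluated on the SAME fresh
    # sample xi_t. With a = 1 this reduces to the plain stochastic
    # gradient; with a < 1 the correction term reduces variance.
    return [gc + (1.0 - a) * (dp - gp)
            for dp, gc, gp in zip(d_prev, grad_curr, grad_prev)]
```

VRAdam presumably feeds an estimator of this kind into Adam-style preconditioned updates in place of the usual first-moment average.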
Strengths: 1. This paper provides the first $O(1/\sqrt{T})$ convergence rate of Adam without the bounded gradient assumption.
2. The constructed stopping time is interesting and can be of independent interest.
Weaknesses: 1. Although this paper provides the first $O(1/\sqrt{T})$ convergence rate of Adam without the bounded gradient assumption, the analysis is somewhat restricted because it requires a non-zero $\lambda$ and the convergence rate has a polynomial dependence on $1/\lambda$. However, in the previous analyses of Adam, including [Défossez et al., 2020; Zhang et al., 2022; Wang et al., 2022], it is allowed that $\lambda=0$ or the rate has a logarithmic dependence on $1/\lambda$. I treat this as a weakness of this paper because, in practice, $\lambda$ is set very close to $0$ (for example, $10^{-8}$ as the default value in PyTorch), which may result in a very loose bound.
2. As mentioned in this paper, the assumed noise condition is still stronger than the assumptions used for the analysis of other optimizers, for example, affine variance assumption.
**Minor Issue**:
The statement with respect to [Zhang et al., 2022; Wang et al., 2022] in line 28 is improper. In fact, if you pick $\beta_{sq}$ according to $\epsilon$ in [Zhang et al., 2022; Wang et al., 2022] just as this paper does, you can derive convergence to stationary points (but with a slower rate). Therefore, a proper statement would be "but the convergence rate of [Zhang et al., 2022; Wang et al., 2022] is slower".
**References**
Defossez et al., A Simple Convergence Proof of Adam and Adagrad, 2019
Zhang et al., Adam Can Converge Without Any Modification On Update Rules, 2022
Wang et al., Provable adaptivity in Adam, 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I will consider increasing my score according to the responses to the questions.
1. Although the studied smoothness assumption is more general, what is the motivation to study it? Specifically, according to [Zhang et al. 2019], $(L_0,L_1)$ smoothness condition seems to be enough to capture the training process of neural networks.
2. The convergence rate has a dependence on the initial gradient norm $\Vert \nabla f(x_1) \Vert $. Does this mean that Adam is sensitive to the initialization? I expect some explanations here.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. We will try to address the concerns and questions of the reviewer below.
**For the weaknesses:**
- First, regarding the dependence on $\lambda$, although it is worse than that in the papers mentioned by the reviewer, a non-zero $\lambda$ allows us to get a dimension-free rate while the existing papers mentioned by the reviewer obtain dimension-dependent rates. Given that the dimension in neural networks is usually large (especially for large language models), we think depending on $\lambda$ is at least better than depending on the dimension, because 1) $\lambda$ is a scalar hyper-parameter which is much easier to tune than the dimension; and 2) Adam is not very sensitive to $\lambda$, and a reasonably large $\lambda$ does not make the performance worse, based on our preliminary experiments (see Figure 2 in the PDF file attached to our global rebuttal to all reviewers).
- Second, regarding the noise assumption, we will leave the relaxation of it as an interesting and important future work. We also thank the reviewer for pointing out our improper statement about [Zhang et al., 2022; Wang et al., 2022] in Line 28, and will make it more precise in the revision.
**For the two questions:**
- 1. Regarding the motivation of our paper, note that Figure 1 in [Zhang et al. 2019] shows that $\log$(Hessian norm) is roughly a linear function of $\log$(gradient norm), whose slope is actually a bit larger than 1, which means the Hessian norm should be a polynomial function of the gradient norm, instead of an affine function. Therefore, we believe our $(\rho,L_0,L_\rho)$ smoothness condition can better capture such a relationship than $(L_0,L_1)$ smoothness.
- 2. Regarding the dependence on the initial gradient norm, we think it comes from our analysis approach rather than the Adam algorithm itself, and thus does not suggest Adam is more sensitive to initialization. We also want to point out that such dependence is not a bad thing, for the following two reasons. First, for neural network training, the initial gradient norm is usually a numerical constant if you apply e.g. Kaiming initialization. Second, the initial gradient norm can be bounded by the initial sub-optimality gap $f(x_1)-f^*$ (consider the reverse-PL inequality $\|\nabla f(x)\|^2\le 2L(f(x)-f^*)$ for classical $L$-smooth functions, which can actually be extended to our generalized smooth functions), so the dependence on $\|\nabla f(x_1)\|$ can be viewed as dependence on $f(x_1)-f^*$. Most convergence results in the optimization literature do depend on $f(x_1)-f^*$.
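For completeness, the reverse-PL inequality for an $L$-smooth function follows in one line from the standard descent lemma evaluated at a gradient step:

$$f^* \le f\Big(x-\tfrac{1}{L}\nabla f(x)\Big) \le f(x)-\tfrac{1}{2L}\|\nabla f(x)\|^2 \quad\Longrightarrow\quad \|\nabla f(x)\|^2 \le 2L\big(f(x)-f^*\big).$$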
---
Rebuttal Comment 1.1:
Comment: My concerns are addressed by the authors' rebuttal, and I increase my score to 6 as promised. | Summary: This paper studies the convergence of the Adam algorithm. Under a more general local smoothness assumption, the convergence of Adam to stationary points is proved without assuming boundedness of the loss gradient. Here the key technique is to show that the loss gradient along the trajectory is indeed bounded, using a proof by contradiction. A variance-reduced variant of Adam is proposed to achieve an accelerated gradient complexity.
Strengths: The convergence of Adam is definitely an important question, and this paper seems to be a significant contribution by relaxing the bounded gradient assumption. The argument of bounding the gradients along the optimization trajectory looks neat and easy to understand. Overall the result of the paper is solid and novel, and the writing is also easy to follow.
Weaknesses: It seems that the proof doesn't suggest the benefit of the momentum. It would be helpful if the authors can comment on this and provide some insights on the theoretical understanding of the momentum term.
It would also be helpful if the authors can summarize the recent results on Adam convergence in the form of a table.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Under the current local smoothness condition, how do we compare the performance of Adam and other algorithms like SGD? Does the current result suggest any advantage of Adam?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback! Below we will try to address the concerns and questions of the reviewer.
Although our analysis of Adam relaxes the assumptions made in previous papers, it does not provide a better theoretical understanding of the benefit of momentum or the advantage of Adam over SGD, in the nonconvex setting. We will provide some of our insights about them below.
- 1. For the former, momentum can accelerate training for convex objective functions (e.g. Nesterov's accelerated gradient method). However, since Adam is usually applied to highly nonconvex functions like the loss for deep neural networks, we do not find it interesting or important to study Adam for convex functions. For neural network training, momentum might also help escape from sharp minima and improve generalization, which we believe is an interesting and important future direction.
- 2. For the latter, our intuition is that, to theoretically prove the advantage of Adam over SGD, one may need to consider different geometries other than Euclidean geometry with $\ell_2$ norm. The reason is that Adam uses coordinate-wise stepsize, which may be helpful for functions with certain geometric properties that favor coordinate-wise updates. We had some preliminary attempts like changing the $\ell_2$ norm in the definition of $(\rho,L_0,L_\rho)$ smoothness (Definition 3.2) to other norms, which unfortunately does not directly work. We will leave it as an important future work as well.
We also thank the reviewer for the suggestion on summarizing the results for Adam in the literature in a table, and will do it in the revision.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I thank the authors for their response and their interesting discussion presented here. I don't have further questions. | Rebuttal 1:
Rebuttal: In the global rebuttal, we provide some preliminary experimental results to help address the concerns of some reviewers regarding the hyper-parameters of Adam, including $\beta,\beta_{\text{sq}}, \lambda$. We train a small MLP on the CIFAR-10 dataset with Adam. The default parameters are $\eta=0.001, \beta=0.9, \beta_{\text{sq}}=0.999, \lambda=10^{-8}$. In the attached PDF file, we show the evolution of test errors (%) with the number of epochs.
Figure 1 shows that Adam is not sensitive to $\beta_{\text{sq}}$, which suggests that choosing $\beta_{\text{sq}}=\beta=0.9$ as in our Theorem 4.1 is as good as the default choice $\beta=0.9, \beta_{\text{sq}}=0.999$. Figure 2 shows Adam is not sensitive to $\lambda$, either. So we think the dependence on $\lambda$ in our rate is not too bad, given that $\lambda=0.1$ still gives good convergence in Figure 2.
Pdf: /pdf/d354776aa3adc7abeb3a7279dcffbbf894a06e0f.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper provides convergence results for Adaptive Moment Estimation (Adam) and its variance-reduced variant under generalized smoothness and bounded-noise assumptions.
Strengths: This paper mainly studies the convergence of Adaptive Moment Estimation (Adam) under a generalized smoothness assumption. The authors drop the commonly used globally bounded gradients assumption and provide a new analysis framework, based on a contradiction argument, to show that gradients are bounded along the optimization trajectory. Based on this, the authors deduce a high-probability convergence bound of $\mathcal{O}(\epsilon^{-4})$ for Adam under the generalized smoothness assumption. The paper also provides a variance-reduced version of Adam and improves the gradient complexity to $\mathcal{O}(\epsilon^{-3})$.
In general, the assumptions are standard in the analysis of adaptive methods, including unbiased gradient estimates and bounded noise. The convergence results make sense and are highly valuable since, to the best of my knowledge, the convergence of Adam under unbounded gradients still remains open. As Adam is a widely used optimizer in the deep learning field, the theoretical results (like the parameter settings in Theorem 4.1 and Theorem 4.3) in this paper could help practitioners better understand and use the algorithm. The central idea of the analysis is inspiring.
Weaknesses: First, the convergence result requires carefully tuned step-size and momentum parameters, specifically requiring prior knowledge such as the generalized smoothness parameters $(L_0, L_1)$ and the noise level $\sigma$. However, in most realistic situations, these parameters could be unknown or hard to obtain. Thus, the result is more valuable in the theoretical aspect but may not be so useful in the experimental aspect.
Second, the parameter setting in Theorem 4.1 contradicts the real-world default setting in deep learning packages (such as PyTorch or TensorFlow), where $\beta_1 = 0.9$ is smaller than $\beta_2 = 0.999$. It may be more persuasive to do some experiments showing the convergence under this new parameter setting.
Third, the high-probability convergence result in Theorem 6.2 does not provide an optimal rate with respect to the probability margin $\delta$, leaving space for further improvement. In addition, since VRAdam is a new algorithm, it would be better to include some experiments showing its convergence under the parameter setting in Theorem 6.2 and its faster convergence rate than Adam.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: see the weakness part
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. Below we will have some discussions about the weaknesses in the comments.
- First, it is true that our hyper-parameters depend on problem-dependent parameters like $\rho,L_0,L_\rho,\sigma$, and the theory does not provide many insights regarding how to choose hyper-parameters in practice in a better way than e.g. grid search. However, the main focus of this paper is to bridge the gap between theory and practice by providing convergence guarantees of Adam under more realistic assumptions than previous works.
- Second, regarding the choice of momentum parameters $(\beta,\beta_{\text{sq}})$, we let $\beta=\beta_{\text{sq}}$ in Theorem 4.1 to make the proof easier. We have some (relatively clear) ideas on how to get rid of this requirement and will try to make it formal in the revision. In addition, Adam is not sensitive to the momentum hyper-parameters based on our preliminary experimental results (see Figure 1 in the PDF attached to our global rebuttal to all reviewers), which shows our hyper-parameter choice is as good as the default one. We will add results for more models and datasets in the revision.
- Finally, we leave the improvement of the dependence on $\delta$ as an interesting future work, and also thank the reviewer for the suggestion about adding experiments for VRAdam, which we will consider doing in the revision.
---
Rebuttal Comment 1.1:
Title: Thanks for the discussions
Comment: Thanks for the interesting discussions.
Here are two remaining questions:
1) Regarding the choice of momentum parameters $(\beta,\beta_{\text{sq}})$, does one need some restrictions on the relationship $\beta$ and $\beta_{sq}$ to get the convergence? Worth noting, in [Defossez et al. 2022], they stipulated the condition $0 \le \beta_{sq} < \beta$.
2) There appears to be some confusion among readers regarding the intricate requirements for $\beta, \eta, G$ as outlined in Theorem 4.1. Would it be possible to present a concrete numerical parameter configuration that adheres to these conditions? Or, could the parameter choices be simplified in some specific cases?
---
Reply to Comment 1.1.1:
Title: Thanks for the reply
Comment: Below we try to answer the two remaining questions in the reply
1. We use a different analysis from existing works like [Defossez et al. 2022]. In Theorem E.1 (in the case of $0\le\rho<1$), $\beta_{sq}$ can be any constant between $0$ and $1$. In Theorem 4.1, we do require $\beta=\beta_{sq}$ for simplicity. However, as we mentioned in the rebuttal, we believe we can completely get rid of this requirement using a more careful analysis, which we will try to make formal in the revision.
2. We find it hard to further simplify the rigorous parameter choices. However, informally speaking, below are the most important requirements, which we will also add in the revision
- $G$ is a large enough constant depending on problem-dependent constants and initial gradient norm
- $\eta=O(\epsilon^2)$
- $\beta=O(\epsilon^2)$
- $T=O(\epsilon^{-4})$ | null | null | null | null | null | null |
Advancing Bayesian Optimization via Learning Correlated Latent Space | Accept (poster) | Summary: Recent advances in Bayesian optimization have shown that it is possible to exploit latent spaces of variational auto-encoders or generative models to perform the optimization of any function defined over a structured space. However, since the optimization takes place in the latent space, there is an inherent gap between the original problem formulation and the optimization of the latent space. This paper proposes a series of new losses based on the smoothness map of the latent space to enforce a correlation between the function values in the latent space and the original space. The authors propose several losses based on Lipschitz constants and local searches to improve the optimization properties. Finally, the authors propose a series of experiments to show the benefit of their method.
Strengths: I liked Theorem 1 which provides a nice theoretical justification of the innovations of the paper.
Moreover, it provides a nice storyline to explain the main elements of the paper and the loss introduced.
The paper is very clear and easy to follow.
There are a lot of numerical experiments showing the benefits of the method.
Weaknesses: Overall, I really like the problem tackled in the paper. However, I have some questions with regard to the contributions, and more particularly with regard to the many heuristics used in the algorithm:
- although there is a nice theoretical justification of the new losses introduced (Eq 3, 4 and 9), the method presented in Algorithm 1 relies on a large number of parameters, such as $k$, the kernel used in the Gaussian process, the batch size, and the latent space update interval $N_{fail}$, which might make the method hard to use in practice. For instance, it seems like some parameters presented in the Appendix, such as the batch size and $k$, are different depending on the use case. Is there a way to know how to choose those hyperparameters that work in most cases?
- Similarly, for the halving of the trust region. It seems to be a full heuristic. Is there a justification for the value of the frequency to half the search space?
- In Eq 7, how do you choose the value of $y_q$? What is the precise meaning of this term in the loss?
- Bayesian optimization is mainly used when the black-box function is expensive (or costly) to evaluate. Moreover, there exist many other methods for black-box optimization when the function is cheap to evaluate. However, when looking at the different experiments, it seems like the experiments can go up to 60K+ iterations. In this case, how would those methods compare to existing techniques (such as fine-tuned simulated annealing, genetic algorithms, and partitioning techniques such as the DIRECT algorithm) or even multi-start local search methods? Does using Bayesian optimization make a real difference?
- Overall and as a last comment, since the method relies on a large number of hyper-parameters, it seems like it is not realistic to choose them in practice in the case of expensive-to-evaluate black-box functions (which is the core justification of Bayesian optimization), in which function evaluations are scarce. How could we improve that in practice?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The questions raised have been asked above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are quickly summarized in the conclusion. It could be nice to have a larger section in the Appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly value the feedback provided by the reviewer. We offer responses to address the issues below.
**Q1. Is there a justification for the value of the frequency to halve the search space?**
We adopt every setting for the trust region from TuRBO [16], which is an established technique. For example, we use 32 for the number of consecutive failures, as in [16]. We also followed [16] without specifically tuning the frequency at which the search space is halved.
**Q2. In Eq 7, how do you choose the value of $y_q$? What is the precise meaning of this term in the loss?**
The value of $y_q$ corresponds to a specific quantile of the distribution of $y$. We use the same value of $y_q$ as in [21]. We assign weights based on the Gaussian Cumulative Distribution Function (CDF), centered on $y_q$.
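To make the weighting concrete, here is a minimal, hypothetical numpy sketch of quantile-centered Gaussian-CDF weights; the function names and the choice of scale are illustrative assumptions, not the exact scheme of [21]:

```python
import numpy as np
from math import erf, sqrt

def gaussian_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def quantile_weights(y, q=0.95, scale=None):
    y = np.asarray(y, dtype=float)
    y_q = np.quantile(y, q)  # y_q: the chosen quantile of observed objective values
    scale = scale if scale is not None else y.std() + 1e-8  # assumed scale choice
    w = np.array([gaussian_cdf((yi - y_q) / scale) for yi in y])
    return w / w.sum()  # normalized sample weights, larger for y near/above y_q
```

Samples with objective values above the quantile receive weights close to 1 before normalization, so they dominate the weighted training objective.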
**Q3. Compared to existing techniques like simulated annealing, genetic algorithms, or partitioning techniques, does using Bayesian optimization make a significant difference?**
Here, our work aims to address the challenges of current (latent) Bayesian Optimization approaches. As a result, our primary focus for comparison is directed towards BO methods. While we acknowledge the potential for valuable insights through comparisons with conventional methods, we will add comparisons in the final version.
**Q4. More detailed limitations**
We will improve the limitations and reflect on Appendix in the final version. | Summary: The paper addresses the problem of Bayesian optimization with the help of a low-dimensional latent space (here learned using a VAE augmented with ad-hoc loss terms).
The paper contributes:
- An analysis/recap of the state-of-the-art highlighting issues raised in other works, particularly the need for smoothness (in the latent space) around the optima.
- A concrete loss function augmenting previous work [10] with two additional regularisation terms to ensure smoothness in the latent space w.r.t. the objective function.
- Experimental study demonstrating the advantages over a suitable set of benchmarks on six problems.
- An ablation study examining the effect of some of the loss terms on the final BO task.
Strengths: - Important and relevant research area.
- The background and motivation for the derived algorithm are well-described.
- The included experiments indicate improvements compared to a set of reasonable baselines.
- The resulting algorithm seems sufficiently novel and builds on the intuition about smoothness w.r.t. the objective function, but I have some concerns/questions…
Weaknesses: Overall weaknesses/comments/suggestions:
- The setup does not come across as principled or consistent from a pure modeling perspective…
- I appreciate the algorithmic-centered view (vs. a probabilistic modeling perspective) adopted in the paper. Still, I am always a bit skeptical when rather ad-hoc loss terms are added to the VAE loss or the joint setup from [10] (which both originate from a proper probabilistic analysis). The combination of and interaction among the many loss terms (in particular $L_z$, $L_{lip}$, vs. the VAE prior) would need quite a lot of analysis to work out the exact effect of the combination (and weighting). Currently, I feel the paper lacks this insight.
- $z$ is a random variable, yet the computation of the 2-norm in Eq. 9 is performed without recognizing this. I'd expect an expectation w.r.t. $q(z|x)$ to be involved in $L_z$ and $L_{lip_W}$ (or a non-central chi distribution if treated analytically)?
- I am unsure why $L_z$ is needed – can this not be controlled directly via the variance on the prior, $p(z)$? Can the smoothness constraints be incorporated/formulated as a more principled prior for z, thus providing a complete probabilistic view and derivation of the algorithm?
- Missing assumptions.
- I think it needs to be argued that it makes sense to globally measure distance in the latent space using a Euclidian distance function.
- It is generally unclear whether you assume $y$ is noiseless or not. This seems important in Eq. 9 (and Theorem 1), i.e., do you use $p(y|x, \cdot)$ or simply the observed $y$ when computing $|y_i-y_j|$?
- The model/loss is not fully specified in the paper, as far as I can tell. Specifically,
- The weights on the loss terms are left out in Eq 11, giving the impression that there are no hyperparameters related to the loss itself, yet the supplementary clearly indicates that some manual fine-tuning is needed. The main paper should be transparent about this. How sensitive is the performance to the weights on the loss terms, and how should they be set in practice?
- l 155: $L_{joint}$ should be explained in more detail to provide a self-contained definition of the model/algorithm.
- There are no details in the paper or supplementary about the likelihood ($p(x|z)$); I’d suggest adding more information about this beyond a reference to the particular VAEs.
- There are no details about the structure and complexity of the problems or the VAE (e.g. the dimensionality of $z$ and its influence); I'd suggest providing this at least in the supplementary.
- Experimental suggestions:
- If possible, I would suggest providing a very simple 2D (maybe 3D ) synthetic example (trained with the proposed model) to provide the reader with a better intuition about what’s going on with the many loss terms (c.f. previous comment).
- The ablation study (on one dataset) is interesting yet should probably include removing the $L_{joint}$ term as well.
Minor questions/comments:
- I’d suggest providing (possibly in the appendix) an optimization trace for one example with the total loss and all the individual loss terms included individually.
- l120: I think adaptively setting $L$ using the median would require more justification and explanation to clarify the properties of this scheme.
- l212: I am unsure what is meant by “….Thompson sampling in $N(0,I)$...”?
- Figure 5: Is this a result based on a VAE with a latent space with dimension 2 or a projection onto 2D?
- l 263: Fig 5 (b) seems less smooth than Fig 5 (a) as is; consider making the statement clearer (i.e. wrt to the objective)
- Most figures lack indication of axis labels (on the figure itself or in the caption)
- It would be helpful with a few more words/details attached to Eq 7 (the proof) in the supplementary material.
- Some sections contain grammatical issues interrupting the flow, e.g. sec 2.4, l. 165 (missing “the”, “a” etc.)
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: The questions are included in the above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See above for technical limitations/issues. There is no need to consider the broader societal impact of the specific paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable feedback. We present comprehensive responses below.
**Q1. Unsure why $\mathcal{L}_z$ is needed**
The $\mathcal{L}_{\mathbf{z}}$ loss serves a specific purpose in our approach. If we were to rely solely on the Lipschitz loss and exclude $\mathcal{L}_{\mathbf{z}}$, it could lead to a trivial solution where the scale of $\mathbf{z}$ merely grows. This can be seen in the Lipschitz ratio we minimize, $\frac{|y_i-y_j|}{\|\mathbf{z}_i-\mathbf{z}_j\|_2}$: inflating the latent scale drives the ratio down by blowing up the distance in the denominator, collapsing the meaning of the distance metric. To prevent this and ensure a meaningful scaling of $\mathbf{z}$, we introduced the $\mathcal{L}_{\mathbf{z}}$ loss term. Furthermore, as described in lines 124-125, this term is closely related to Theorem 1, providing additional theoretical justification for its inclusion. We agree that the VAE prior can control these aspects; however, we expect the current formulation of $\mathcal{L}_{\mathbf{z}}$ to provide a targeted and effective solution to a specific challenge we identified in our approach. We also empirically verified its effectiveness.
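For concreteness, a minimal numpy sketch of how such a pair of terms could interact; this is our simplified illustration with assumed names, a hinge-style penalty, and the median-based constant, not the paper's exact losses:

```python
import numpy as np

def lipschitz_and_scale_losses(Z, y, eps=1e-8):
    # pairwise latent distances and objective-value gaps
    dz = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    dy = np.abs(y[:, None] - y[None, :])
    iu = np.triu_indices(len(y), k=1)          # unique pairs only
    ratios = dy[iu] / (dz[iu] + eps)           # local Lipschitz estimates
    L = np.median(ratios)                      # adaptive constant (median, per the rebuttal)
    lip_loss = np.mean(np.maximum(ratios - L, 0.0))   # penalize ratios above L
    z_loss = np.mean(np.linalg.norm(Z, axis=1) ** 2)  # keep the latent scale from growing
    return lip_loss, z_loss
```

Without the scale term, an optimizer could drive `lip_loss` toward zero simply by multiplying all latent codes by a large constant, which is the trivial solution described above.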
**Q2. I think it needs to be argued that it makes sense to globally measure distance in the latent space using a Euclidian distance function.**
We appreciate your insight regarding the measurement of distance function. In the context of the Variational Autoencoder, the Euclidean space assumption is widely adopted [a], and this assumption guided our choice to measure distances in the latent space using the Euclidean distance function. It would be good for future work to define and use the geodesic distance on the manifold based on the Riemannian manifold [b].
[a] Kalatzis, D., Eklund, D., Arvanitidis, G., & Hauberg, S. Variational Autoencoders with Riemannian Brownian Motion Priors. International Conference on Machine Learning, 2020.
[b] Pennec, X. Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. Journal of Mathematical Imaging and Vision, 2006.
**Q3. l 263: Fig 5 (b) seems less smooth than Fig 5 (a) as is; consider making the statement clearer.**
We understand your point regarding Figure 5 (b) appearing less smooth than Figure 5 (a). It is possible to interpret the discontinuous portion in the middle of Figure 5 (b) as lacking smoothness. However, our intention was to illustrate the smoothness of the objective value at each location, rather than the visual continuity of the plot itself. Each color in the figure represents a different objective value, and we aimed to show that nearby points have similar colors, reflecting similar objective values. We will clarify the explanation of visual representation in Figure 5 (b).
**Q4. l120: I think adaptively setting L using the median would require more justification and explanation to clarify the properties of this scheme.**
We choose $L$ as the median to create a more robust and adaptive mechanism that can respond to varying properties within the data. This adaptive scheme allows the model to dynamically align with the characteristics of the data at different stages of training or across different datasets, which alleviates sensitivity to the data and enhances stability in the learning process. We will provide additional analysis in the final version.
**Q5. Unsure on “….Thompson sampling in $N(0,I)$...”?**
What we aim to convey is that, in our approach using Gaussian Processes (GP), we sample a function from the GP posterior and then draw candidate points from the normal distribution $N(0,I)$, among which we pick the optimal input value.
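A minimal numpy sketch of this step (illustrative only; the kernel, jitter, and candidate count are our assumptions): draw candidates from $N(0,I)$, sample one function from the GP posterior at those candidates, and return the maximizer:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # squared-exponential kernel between two point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def thompson_candidate(X, y, dim, n_cand=256, jitter=1e-6, seed=None):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_cand, dim))     # candidates drawn from N(0, I)
    K = rbf(X, X) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(Z, X)
    mu = Ks @ alpha                            # posterior mean at candidates
    v = np.linalg.solve(L, Ks.T)
    cov = rbf(Z, Z) - v.T @ v                  # posterior covariance at candidates
    f = rng.multivariate_normal(mu, cov + jitter * np.eye(n_cand))  # one posterior draw
    return Z[np.argmax(f)]                     # maximizer of the sampled function
```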
**Q6. Do you use or simply the observed y when computing $|y\_i-y\_j|$?**
In all the tasks we ran, the objective function was noiseless, and we also assumed that $y$ is noiseless. We will clarify this in the final version.
**Q7. How does the combination and interaction of the various loss terms (such as $\mathcal{L}\_z$, $\mathcal{L}\_{lip}$, and the VAE prior) work, and what analysis has been done to understand their exact effect, considering the current paper seems to lack this insight?**
Each of these loss terms plays a crucial role in achieving a specific goal, namely Lipschitz regularization and latent space regularization. Also, we showcased and analyzed how these loss terms lead to effective optimization in the latent space. We agree that further theoretical analysis would make the paper stronger and will add it in the final version.
**Q8. $\mathbf{z}$ is a random variable, yet the computation of the 2-norm in Eq. 9 is performed without recognizing this. I’d expect an expectation w.r.t. $q(z|x)$ to be involved in $\mathcal{L}_z$ and $\mathcal{L}\_{lip\\_W}$ (or a non-central chi distribution if treated analytically)?**
Your expectation is correct. As you say, we compute the loss using the sampled $z$ in Eq. 9 and some other equations; we were using bold to denote vectors. We will remove the bold to distinguish the sampled $z$ from the random variable $\mathbf{z}$ and clarify the notation.
**Q9. Additional questions/suggestions.**
Due to limited space, we were unable to address all raised issues. We commit to correcting all typos and clarifying notations, adding analyses based on your valuable suggestions for the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for responding to my (and the other reviewers') comments/questions.
Re Q1 hyperparameters (summary): Could the authors please elaborate on the statement, “The grid search was each conducted for 20 percent of the total Oracle budget.” including an explanation of when/where exactly this was performed in the process?
Re aw6W-Q1: Thanks for clarifying Eq. 4, and I now appreciate the desire for explicit control over the average distances relying not on $p(z)$. Although I suppose $L_z$ becomes redundant when the aggregated posterior and the prior are very similar, leading to the expected distances given by Eq 5…? I’d suggest providing strong arguments as to why the prior is insufficient to ensure desired properties in general.
Re aw6W-Q7: Please clarify what “ …further theoretical analysis….” implies (i.e., what can we expect in a final version)?
Overall, I remain slightly skeptical of the paper given the many updates needed/promised, but I am leaning towards increasing my score slightly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued discussion. Here are our responses to the points raised:
**Q1. More explanation of the hyperparameter search.**
First, we tuned only two hyperparameters: the loss weights for $\mathcal{L}\_{\mathrm{lip}}$ and $\mathcal{L}_\mathbf{z}$. We performed a grid search over 6 combinations, i.e., [10, 100, 1000] x [0.1, 1] for the two losses. To pick the best hyperparameters, we preliminarily ran experiments with 20% of the maximum query budget. For instance, we set the maximum budget to 70K for the Guacamol dataset and simply picked the hyperparameters with the best performance at 14K. So, the total query budget for the grid search is 14K x 6 oracle calls.
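For concreteness, the budget arithmetic described above can be sketched as follows (the numbers are taken from the response; the variable names are our own):

```python
import itertools

# Candidate loss weights for L_lip and L_z (the grid described above).
lip_weights = [10, 100, 1000]
z_weights = [0.1, 1]

total_budget = 70_000                     # e.g., maximum oracle budget on Guacamol
search_budget = total_budget * 20 // 100  # 20% preliminary budget -> 14K calls each

grid = list(itertools.product(lip_weights, z_weights))  # 6 combinations
total_search_calls = search_budget * len(grid)          # 14K x 6 = 84K oracle calls
```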
**Q2. Provide arguments as to why the prior is insufficient. (why $\mathcal{L}_\mathbf{z}$ needed)**
We want to optimize a latent space allowing various distributions of the latent $\mathbf{z}$, and we believe that the prior term, the KL divergence between the standard normal distribution and the embeddings $\mathbf{z}$, is stricter than what we need. The prior explicitly requires zero mean and identity covariance. Also, the mode of a standard normal distribution is dense, and many solutions around the mean have very small pairwise distances. On the other hand, $\mathcal{L}_\mathbf{z}$ does NOT require any specific mean, and the distribution of $\mathbf{z}$ need be neither isotropic nor unimodal. Lastly, our regularization encourages the distance between solutions in the latent space to be neither too small nor too large, $\| \mathbf{z}\_i - \mathbf{z}\_j \|_2 \approx c$.
**Q3. Clarification on “further analysis regarding the combination and interaction of the various loss terms (VAE loss vs $\mathcal{L}\_\mathrm{lip}$ and $\mathcal{L}\_\mathbf{z}$)”.**
- In Section 3.3 of our main paper, we have presented analyses concerning proposed regularizations. For instance, Figure 4 illustrates the higher correlation between the objective value $y$ and the latent vector $\mathbf{z}$ when employing $\mathcal{L}\_\mathrm{lip}$ and $\mathcal{L}\_\mathbf{z}$. Also, we show a smoother latent space with our proposed regularizations in Figure 5.
- What "further analysis" meant was conducting additional analyses of the *individual* impacts of each regularization term, as you suggested. For example, to understand how $\mathcal{L}\_\mathrm{lip}$, one of our regularizations, affects the landscape of the latent space, we conducted preliminary experiments with a 2D synthetic dataset (samples generated with grid sampling) and trained a VAE model with and without $\mathcal{L}\_\mathrm{lip}$. We chose the Ackley function as the objective function since it is a non-convex function with a large number of local minima, i.e., it has a *non-smooth* landscape. After training, we drew plots of the latent vectors with corresponding objective values and observed a more locally smooth landscape with $\mathcal{L}\_\mathrm{lip}$. Since we cannot provide more figures at this moment, we will incorporate these analyses into the Appendix of the final version.
- Furthermore, we will include an additional smoothness analysis similar to that in Section 3.3. We will provide plots for two more cases: 1) $\mathcal{L}\_\mathrm{lip}$ alone and 2) VAE loss with $\mathcal{L}\_\mathrm{lip}$. Note that the results presented in Figure 5 of the main paper correspond to the cases of 1) only VAE loss and 2) VAE loss+$\mathcal{L}\_\mathrm{lip}$+$\mathcal{L}\_\mathbf{z}$. | Summary: This paper proposes several heuristic regularization constraints for learning a Bayesian optimization latent space. It argues that the learned latent space needs to be aligned to the black-box function values, and this is achieved via keeping the Lipschitz constant small and the mean latent distance (of training samples) constant. It also argues that promising points need to be prioritized during the latent space optimization, and this is achieved via weighting the reconstruction loss and the Lipschitz regularization loss above at each training point by its function values. The paper evaluates the proposed method on several benchmark functions, yielding positive results.
Strengths: The proposed approach makes some practical sense, and yields good performance on several benchmarks. The approach is also novel, so I believe the paper has some intellectual and practical merits.
Weaknesses: 1. I think the problem is not very clearly described and motivated. What exactly are the "gaps" between the latent space and the input space? Throughout the paper, I have not seen any technical description of this issue. To quote the manuscript, the two main gaps are:
- "First, even though the surrogate model g learned in the latent space the objective value is still obtained by a black-box function defined in the discrete input space X so the gap between objective values and latent spaces leads to poor optimization performance".
- "Second, since the distribution of samples expected to have high objective values is different from that of samples observed in pretraining, there is a gap between the input space and the latent space that makes the optimization inaccurate".
Neither statement gives a precise description of what these gaps are, so I cannot be convinced that the proposed solutions are meaningful.
2. A lot of the technical details also lack motivation. Please refer to the specific questions below.
3. Different BO approaches have different per-iteration cost, and it would be slightly unfair to only compare the performance vs. number of oracle calls. I would strongly suggest providing a plot to show performance vs. wall-clock time, or a table documenting the runtime per iteration of each baseline.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. I don't understand why $\mu_{D_Z}$ and $\sigma_{D_Z}$ are not constants the same way $\mu_{D_Y}$ and $\sigma_{D_Y}$ are? And what are they constants with respect to?
2. Eq. (4) seems like a much stricter objective than what is needed -- it means that the points are optimized towards being equidistant. Also, when every $||z_i - z_j||_2 \simeq c$, what will remain for L_lip to optimize?
3. Why does blurring y with Gaussian noise in Eq. (7) make sense? Couldn't we just directly let $\lambda(y) = y_q - y$ (and maybe normalize this by y_max so that the weight is between 0/1) ?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed broader impacts & limitations. I don't see any potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable feedback. Below, we present comprehensive responses addressing the questions raised by the reviewer.
**Q1. Performance comparison w.r.t. wall-clock time.**
We provide results on the Guacamol dataset for the respective tasks with respect to wall-clock time, as requested. We report the Found Best Score at the same wall-clock time, determined by the fastest baseline model to reach 70K oracle calls.
| zale | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **0.773** | 0.745 | 0.606 | 0.566 | 0.511 |
| Oracle calls | 53K | 33K | 13K | 46K | **70K** |
| Wall clock time (hr) | **2.6** | **2.6** | **2.6** | **2.6** | **2.6** |
| | | | | | |
| med2 | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **0.379** | 0.352 | 0.330 | 0.312 | 0.312 |
| Oracle calls | 58K | 35K | 20K | **70K** | 58K |
| Wall clock time (hr) | **2.9** | **2.9** | **2.9** | **2.9** | **2.9** |
| | | | | | |
| osmb | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **0.928** | 0.910 | 0.898 | 0.834 | 0.844 |
| Oracle calls | 67K | 43K | 21K | **70K** | 64K |
| Wall clock time (hr) | **3.5** | **3.5** | **3.5** | **3.5** | **3.5** |
| | | | | | |
| pdop | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **0.834** | 0.796 | 0.635 | 0.587 | 0.565 |
| Oracle calls | 41K | 35K | 17K | **70K** | 52K |
| Wall clock time (hr) | **2.7** | **2.7** | **2.7** | **2.7** | **2.7** |
| | | | | | |
**Q2. Why use Gaussian noise in Eq. (7) for blurring y instead of directly using $\lambda(y)=y_q-y$?**
We opted for the weight function from [21], as it demonstrated superior performance compared with other weight functions. We performed preliminary experiments for weight-function selection on Guacamol (w/ pdop). We provide the experimental results and formulations of each weight function in the PDF file. Among these weight functions, Rwr closely aligns with your suggestion (with the addition of a temperature parameter). It is clear that [21] (indicated by the blue line) shows the most effective performance.
**Q3. Why are $\mu_{D_Z}$ and $\sigma_{D_Z}$ not constants in the same way $\mu_{D_Y}$ and $\sigma_{D_Y}$ are? And what are they constants with respect to?**
Since the input data points are associated with fixed objective values of the black box function, we can consider the $\mu_{D_Y}$ and $\sigma_{D_Y}$ as constants. However, the distance in the latent space (i.e., ${D_Z}$) depends on the mapping function (encoder). Because the mapping function can be changed during the optimization process, $\mu_{D_Z}$ and $\sigma_{D_Z}$ are not treated as constants.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I appreciate the new results. It also seems like you missed one of my question. Can you provide a discussion regarding my Q2?
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply! Regarding Q2 (the second question in the "Question" section: about Eq. 4), we have already addressed this inquiry in the above section of our global response. We kindly request you to review our response. | Summary: This paper proposes a latent space Bayesian Optimization approach based on the intuition that distances in the latent space should be correlated with differences in objective value. CoBO iteratively updates a variational autoencoder (VAE) to align distances in latent space with differences in objective function. The method leverages a lower bound on correlation to introduce new regularizers on the VAE objective. Combined with loss weighting and latent space “recoordination”, the method achieves state-of-the-art performance on diverse benchmarks.
Strengths: The paper mostly applies existing techniques to achieve incremental but statistically significant improvement over prior state-of-the-art. Theorem 1 provides a useful result that is a significant contribution of its own, if it is in fact a new bound.
The paper is written very well and will likely serve as a benchmark for future BO algorithms. Algorithm 1 provides a nice, approachable, and thorough description of CoBO. Figures 4-5 show strong evidence that the CoBO objective has a strong positive effect on organizing the latent space.
The experimental results are convincing. An ablation study justifies the inclusion of all components of the proposed VAE objective, albeit on only one optimization benchmark (hopefully these results hold for all benchmarks studied).
I would suggest trying to fit Table 3 from the Supplemental into the body of the paper, as it would strengthen the experimental evaluation. The results show strong performance especially in the low-budget setting.
Weaknesses: It does not seem natural to assume that the mapping $f$ is Lipschitz for a VAE trained in an unsupervised manner. Won’t it be the case that the objective is not Lipschitz in the input space of many useful models, especially over discrete spaces? Specifically, do you have any experiments demonstrating Lipschitz continuity of the objective by sampling points in the latent space?
_Note that the “recoordination” technique claimed by this paper previously existed in Maus et al. under the name “recentering”._
Evaluation is a little limited. Do the results hold for all GuacaMol tasks? For example, Maus et al. evaluate on additional GuacaMol tasks such as ranolazine.
Evaluations on DRD3 only go up to 1,000 evaluations of the objective function. This is understandable given the expense of running the docking objective, but on the other hand, this is the most “real-world” objective studied. Would it be possible to run for longer to determine whether CoBO can eventually beat LOL-BO or GraphGA?
The axes for all plots start at 0; it would be more helpful to include initialization points in the evaluation budget for comparison with prior work.
Specific issues:
* Articles are missing in several locations, including in the abstract (line 12). Please proofread for small errors.
* Equation 4 shows a regularizer that encourages __every__ pair of distances to be exactly $c$; it does not regularize the mean as claimed in the text. Is the text or the equation incorrect?
* Do you recoordinate all points during fine tuning? If so, add this step to Algorithm 1 (I believe it currently only shows recentering for the trust region, not the surrogate).
* Is there a reason that you report the objectives in Arithmetic expressions as a minimization problem, but transform the DRD3 task into a maximization task? Retaining the convention of the TDC leaderboard would improve the accessibility of the paper to a wide audience.
Small comments:
* Define “an inherent gap” in the abstract. This notion is referred to multiple times in the paper, but never precisely. Is the claim that there do not exist points in the latent space $z$ that correspond to an optimal point in input space $x$? Or that these regions are small and hard to find?
* It is not immediately obvious in the introduction that this method will leverage a pre-trained VAE that is trained in a purely unsupervised manner.
* In Equation 7, notation seems to be overloaded. Do you mean a quantile of the empirical distribution of objective values seen during training, $y_i$? At line 140, is $\sigma$ a hyperparameter or equal to $\sigma_Y$?
* Algorithm 1 line 14: typo “is the best score than”
* When discussing “initialization points” in Section 3, it would be useful to use the notation $D_0$ from Algorithm 1.
* In Figure 2, should “TDL” be “TDC”?
* It would be helpful to label the y-axes in Figure 4.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. The paper ignores the fact that the VAE decoder in many applications is stochastic (e.g. the SMILES decoder from Maus et al.). This would motivate a probabilistic treatment of f(Z) in Theorem 1.
2. In Theorem 1, won’t it always be the case that $D_Y \le D_Z$ by the Lipschitz assumption? Why is this stated as a sufficient condition?
3. On line 120, is this Lipschitz constant computed as the median __per mini-batch__?
4. Equations 4-6 do not appear to be consistent. You derive the distribution of inter-point distances, so why regularize all distances to be the same? Do you think of this as an additional discrepancy measure between your VAE prior and posterior? Can this be connected to the MMD regularization in InfoVAEs?
5. The paper describes the method in terms of standard Gaussian Process kernels. Do you not use a deep kernel, even for the drug discovery tasks?
6. How are hyperparameters determined, particularly in the ablation study?
7. How do you choose the initialization points?
8. Does Figure 4 include the recoordination step?
9. In Figures 4-5, would it be possible to compare to the prior state-of-the-art (LOL-BO) to substantiate the claim that that method does not effectively reorganize the latent space, rather than simply comparing to a fixed latent space?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: I believe that CoBO can only be expected to be effective on objectives that are Lipschitz continuous; this limitation is not mentioned. The author’s consideration of possible malicious uses for drug discovery models is much appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thorough and detailed feedback. We will address the issues below.
**Q1. More evaluations on DRD3.**
We conducted the experiment with DRD3 for a duration of up to 5 days, reaching a maximum of 2500 oracle calls within the available resources. We also provide the optimization curve in the PDF file. CoBO shows superior performance over the baselines.
| DRD3 | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **15.6** | 14.0 | 14.2 | 12.4 | 12.2 |
| Oracle calls | **2500** | **2500** | **2500** | **2500** | **2500** |
| | | | | | |
**Q2. Additional tasks for Guacamol.**
We report three additional tasks (rano, valt, adip) for the Guacamol dataset. With limited resources, we set the oracle budget to 50K. To ensure fairness, we also include the Found Best Score at the same wall-clock time. We also provide the optimization curve in the PDF file. CoBO demonstrated superior performance on the additional tasks, consistent with the results of the main paper.
| rano | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **0.9548** | 0.9244 | 0.9155 | 0.8184 | 0.7775 |
| Oracle calls | **50K** | **50K** | **50K** | **50K** | **50K** |
| Wall clock time (hr) | 14.5 | 11.8 | 9.1 | 5.8 | **4.8** |
| | | | | | |
| Found Best Score | **0.9530** | 0.8329 | 0.9014 | 0.8184 | 0.7775 |
| Oracle calls | 32K | 15K | 17K | 40K | **50K** |
| Wall clock time (hr) | **4.8** | **4.8** | **4.8** | **4.8** | **4.8** |
| | | | | | |
| valt | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **0.9981** | 0.9961 | 0.9727 | 5e-15 | 2e-31 |
| Oracle calls | **50K** | **50K** | **50K** | **50K** | **50K** |
| Wall clock time (hr) | **3.2** | 10.9 | 21 | 11.5 | 8.0 |
| | | | | | |
| Found Best Score | **0.9981** | 0.9759 | 2e-26 | 2e-26 | 2e-31 |
| Oracle calls | **50K** | 25K | 10K | 15K | 20K |
| Wall clock time (hr) | **3.2** | **3.2** | **3.2** | **3.2** | **3.2** |
| | | | | | |
| adip | CoBO (Ours) | LOL-BO | W-LBO | TuRBO | LS-BO |
| --- | --- | --- | --- | --- | --- |
| Found Best Score | **0.8321** | 0.7746 | 0.7817 | 0.6983 | 0.6983 |
| Oracle calls | **50K** | **50K** | **50K** | **50K** | **50K** |
| Wall clock time (hr) | 27 | 6.7 | 32.6 | **3.1** | **3.1** |
| | | | | | |
| Found Best Score | **0.8321** | 0.7518 | 0.7534 | 0.6983 | 0.6983 |
| Oracle calls | 27K | 33K | 14K | **50K** | **50K** |
| Wall clock time (hr) | **3.1** | **3.1** | **3.1** | **3.1** | **3.1** |
| | | | | | |
**Q3. Is it natural to assume that the mapping f is Lipschitz for a VAE pretrained unsupervisedly, especially over discrete spaces?**
Good question! We do NOT assume that the pretrained VAE is Lipschitz continuous; that is why we proposed the regularizers in Eqs. 3 and 4 of the main paper. Also, we observe that although the input/output of the VAE are discrete values, except for the first and last layers the VAE internally treats the data as continuous: all intermediate features, including the latent variable $z$, are continuous values. Therefore, imposing a Lipschitz constraint on the VAE is a reasonable remedy to improve the latent space and reduce the gap between the latent space and the unknown objective function.
**Q4. Is the claim that there do not exist points in the latent space that correspond to an optimal point in input space? Or that these regions are small and hard to find?**
Great question! In general, the latent space has a point that corresponds to an optimal solution in the input space. First, the VAE returns a probability mass function with $p \in (0,1)$. Since it cannot be strictly 0 or 1, the generated PMF covers all possible discrete solutions. In addition, the Gaussian distribution in the continuous latent space has infinite support, so the latent vector that corresponds to an optimal solution has non-zero density. Indeed, multiple latent vectors correspond to the same optimal solution. In sum, the optimal point must exist in the latent space, but the regions can be small, or the likelihood of obtaining the optimal latent vector could be small.
**Q5. The stochastic nature of the VAE decoder.**
We treat the decoder as deterministic by considering the expectation $\mathbb{E}_\theta(p\_\theta(\mathbf{x}|\mathbf{z}))$. We acknowledge that considering the stochastic nature of the decoder can provide a more precise understanding, so we will reflect this in our future work.
**Q6. About "recoordination".**
The concepts of [10] and CoBO are similar, but there are some differences in implementation, especially in deciding the center. In [10], there is a possibility of retrieving the latent space from before updating the VAE, due to implementation issues. [10] determines the center based on the latent space $\mathbf{z}$ with the decoded input w.r.t. the best objective value $y$, whereas ours uses the latent space $\mathbf{z}$ with the input w.r.t. the best objective value $y$. We empirically showed better performance over [10]. Also, we only recoordinate the center of the trust region, rather than all points during fine-tuning.
**Q7. Do you have experiments showing Lipschitz continuity of the objective?**
We agree that understanding the Lipschitz continuity of the objective value is an essential aspect of our study. To demonstrate Lipschitz continuity, we measure the change of the Lipschitz loss across multiple methods (for the baselines, the Lipschitz loss is only measured, not used in training). The curve is in the PDF file. The Lipschitz loss increased at the beginning of the training process, due to the differences in the observed objective values.
**Q8. Additional questions/suggestions.**
Due to limited space, we were unable to address all raised issues. We commit to correcting all typos and clarifying notations in line with your valuable suggestions, for the final version.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you to the authors for a thoughtful rebuttal, and especially for providing new experiential results. I remain supportive of this paper, as it appears to be a current leading Bayesian optimization approach for de novo molecular design tasks. I remain conflicted, though, because of the limited methodological novelty and incremental improvement in results.
After reviewing the additional experiments, I would like to verify that the authors are treating the DRD3 task as a _minimization_ problem. If the objective values shown in the plot are the _negative_ of the DRD3 objective on the [leaderboard](https://tdcommons.ai/benchmark/docking_group/drd3/), then it appears from the results in the rebuttal PDF that LOL-BO is performing better for small query budgets than reported in Maus et al.
It appears that the new experiments are similar to the results in the paper: CoBO provides small gains over LOL-BO (e.g. for Ranolazine), especially for small query budgets.
Can you confirm whether Theorem 1 provides a novel contribution or whether this is restating a well-known theory, in your view?
Please ensure that you explain the difference between recentering and coordination in your camera-ready version to avoid confusion.
If possible, please correct figure x-axes to include initialization points in the query budget.
---
Reply to Comment 1.1.1:
Comment: Thank you for your swift feedback. Here's a summary of our response:
**Q1. Clarification on DRD3 task.**
A1. In Fig. 3 (b), we removed the minus sign from the function value and plotted it as a maximization problem. In the final version, we will ensure that the plot for the DRD3 task accurately represents it as a minimization problem, as suggested. The performance difference between our experimental results in the attached PDF file and the original paper (Maus et al.) may come from hyperparameters: since we could not find the original implementation details for the LOL-BO+DRD3 task, we independently searched for appropriate hyperparameters. Also, note that we could only conduct a single experiment for each model due to limited time/resources. We will conduct more comprehensive experiments for the final version, possibly with error bars.
**Q2. Small gain over LOL-BO in additional experiments (for small query budgets).**
A2. In the PDF file, the first figure on the top shows that CoBO outperforms every baseline in all four tasks; even for small query budgets (e.g., < 20,000 oracle calls), CoBO shows superior performance (e.g., Adip, Rano(lazine)). We note that for Valt, CoBO may show a relatively small gain over LOL-BO once performance is saturated.
**Q3. Novelty on Theorem 1.**
A3. To the best of our knowledge, our Theorem 1 is novel. We could not find a theory similar to Theorem 1.
**Q4. Clarification for recentering/recoordination + Correction on figure (x-axes).**
A4. Thank you for your thorough and detailed comments! We will clarify the "recoordination" part in the final manuscript and correct the x-axes to include the initialization points.
Rebuttal: We thank all reviewers for their thorough and thoughtful feedback. We will address all issues raised by the reviewers below. Especially, we will provide answers to common questions that we have received from multiple reviewers in this General section.
**Q1. Regarding hyperparameter search and selection. (Bvzj, aw6W, ynAb)**
We grid-searched the coefficients of our proposed regularizations $\mathcal{L}\_{\text{lip\\_w}}$ and $\mathcal{L}\_{\mathbf{z}}$ over the range [10, 100, 1000] x [0.1, 1]. Each grid search was conducted with 20 percent of the total oracle budget. The final coefficients are given in Appendix E.1.
For other hyperparameters, such as the coefficients of the other losses, batch size, learning rate, etc., we fixed the values as in Table 5 of Appendix E.1 and did not search further. Rather, we utilized hyperparameters from previous methods, e.g., the trust-region hyperparameters from TuRBO [16]. For the batch size, we chose values within the range allowed by GPU memory, and for initialization points we mostly followed [15]; where official evaluation settings from the benchmark dataset exist (e.g., DRD3), we followed those.
We thank all reviewers for suggestions and we will reflect the above details in the Appendix for the final version.
**Q2. Confusion regarding Eq. 4 in the main paper. (Bvzj, 4zSv)**
We apologize for the confusion. Our intention is to control the average of the $\mathbf{z}$ distances, not to optimize for all distances between points to be equidistant. Thus, Eq. 4 should be revised as below:
$\displaystyle\mathcal{L}_{\text{z}} = {1\over N^2} \sum\_{i,j\le N} \bigg|||\mathbf{z}_i - \mathbf{z}_j||_2-c\bigg| \rightarrow\mathcal{L}\_{\text{z}} = \bigg|{1\over N^2} \sum\_{i,j\le N} ||\mathbf{z}_i - \mathbf{z}_j||_2-c\bigg|$
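The difference between the two forms of Eq. 4 can be illustrated with a small, self-contained sketch (plain Python; this is our own illustration, not the paper's implementation):

```python
def pairwise_dists(zs):
    """All ordered pairwise Euclidean distances ||z_i - z_j||_2 for i, j <= N."""
    return [sum((a - b) ** 2 for a, b in zip(zi, zj)) ** 0.5
            for zi in zs for zj in zs]

def lz_equidistant(zs, c):
    """Original (mistyped) form: mean of |dist - c|, which pushes every
    pairwise distance toward c."""
    d = pairwise_dists(zs)
    return sum(abs(x - c) for x in d) / len(d)

def lz_mean(zs, c):
    """Revised form: |mean dist - c|, which only constrains the average
    pairwise distance."""
    d = pairwise_dists(zs)
    return abs(sum(d) / len(d) - c)
```

For instance, two 1-D points at distance 2 with $c = 1$ already have average pairwise distance 1 (counting the zero self-distances in the $1/N^2$ sum), so the revised loss is 0 while the original form is not.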
**We provide additional experiments on the attached PDF file.**
These are details on references in the PDF file:
[A] Rubinstein, R. The cross-entropy method for combinatorial and continuous optimization. Methodology and computing in applied probability, 1999.
[B] Tripp, A., Daxberger, E., & Hernández-Lobato, J. M. Sample-efficient optimization in the latent space of deep generative models via weighted retraining. Advances in Neural Information Processing Systems, 2020.
[C] Peters, J., & Schaal, S. Reinforcement learning by reward-weighted regression for operational space control. International conference on Machine learning, 2007.
Pdf: /pdf/c7126897464e6bddfcc04ef9cbb8e2918477de58.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Closing the gap between the upper bound and lower bound of Adam's iteration complexity | Accept (poster) | Summary: This paper presents a convergence analysis of Adam under only the smoothness and bounded variance conditions.
Strengths: - The strength of the paper is to present a convergence analysis of Adam under only the smoothness and bounded variance conditions, in contrast to the existing analyses (Section 3). We are interested in the analysis of Adam under only the two conditions, since the conditions are more natural and realistic than the boundedness of the gradient norm of the objective function and the Lipschitz continuity of the stochastic gradient.
- The abstract indicates that "Especially with properly chosen hyperparameters, we derive an upper bound of iteration complexity of Adam and show that it meets the lower bound for first-order optimizers." Proposition 1 shows that the iteration complexity $\mathcal{C}$ of a first-order optimizer is $\Omega (1/\epsilon^4)$, that is, there exist $c_1, c_2 > 0$ such that $c_1/\epsilon^4 \leq \mathcal{C} \leq c_2/\epsilon^4$. Theorem 2 implies that Adam satisfies that $\mathcal{C} = O(1/\epsilon^4)$. Hence, the paper concludes the claim of the abstract.
Weaknesses: I understand the motivation of the paper. Meanwhile, many theoretical and practical results on Adam have been presented. Hence, unfortunately, I do not find the results in the paper surprising compared with the existing ones. Please see Questions.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I have the following concerns. If the authors can address them, then I think the paper has sufficient contribution and novelty.
- Theorem 1/Theorem 2 implies the upper bounds of the gradient norm of Adam using a constant step-size $\eta$/a diminishing step-size $\eta = a/\sqrt{T}$. The theorems lead to the finding that, e.g., (Lines 159--162) "Theorem 1 holds for general choices of hyperparameters since the only condition posed on hyperparameters is $\beta_1 < \beta_2$. Such condition covers a wide range of hyperparameters, e.g., the default setting $\beta_1 = 0.9$ and $\beta_2 = 0.999$ in PyTorch [19]." I understand that using $\beta_1$ and $\beta_2$ close to 1 is useful to implement Adam. However, I cannot understand that Theorem 1 implies the claims in (Lines 159--162). Does Theorem 1 imply that $\beta_1, \beta_2 \approx 1 \Rightarrow$ the right-hand side of (2) is small? Can the authors show that the right-hand side of (2) is a decreasing function with respect to $\beta_1$ and $\beta_2$?
- The above discussion holds for Theorem 2 and Lines 250--253. Does Theorem 2 imply that $a, b \approx 0 \Rightarrow$ the upper bound of the gradient norm is small? Can the authors show that it is decreasing when $a$ and $b$ are small?
- The above two concerns mean that there is a gap between theory (Theorems 1 and 2) and practice ($\beta_1 = 0.9$ and $\beta_2 = 0.999$). Can the authors bridge the gap?
- Can the authors provide lower bounds of the gradient norm of Adam, such as $\mathbb{E}\sum_{i=1}^T \Vert \nabla f(w_t) \Vert \geq$ $C \times$ (the right-hand side of (2))?
- Paper [25] and its references are sufficient for both theory and practice. Can the authors show more novelty and contributions from Theorems 1 and 2?
- Related to the above concern, I believe that practical results (e.g., numerical results or new hyperparameter settings) should be provided based on the novelty and contributions of the paper. Can the authors provide such practical results? For example, it would be nicer to numerically compare Adam using $\beta_1 = 0.9$ and $\beta_2 = 0.999$ with Adam using a new setting of $\beta_1$ and $\beta_2$ based on Theorems 1 and 2.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and time. After reading the review, we realize that there could be misunderstandings over our contribution, which we try to clarify as follows.
**On the novelty and contribution.** From the theoretical perspective, characterizing the upper and lower bounds of the iteration complexity of an algorithm is one of the most fundamental problems in the ML/optimization community. Thus, closing the gap between the upper bound and lower bound should be considered a significant contribution, as also acknowledged by other reviewers. We argue that this contribution is all the more significant because it addresses a critical gap in the understanding of Adam's performance, given that Adam is currently one of the mainstream optimizers for deep learning.
Other concerns are addressed below.
**Other concerns:**
**Q1**: I cannot understand that Theorem 1 implies the claims in (Lines 159--162).
**A1**: There could be a misunderstanding. "Theorem 1 holds for general choices of hyperparameters" means that for a wide range of hyperparameters, one can apply Theorem 1 to obtain an estimate of the gradient norm (i.e., Eq. (2)). This claim has nothing to do with when the right-hand side of (2) is small; it concerns only the applicable region of Theorem 1.
However, we thank the reviewer for raising the interesting question "Does Theorem 1 imply that $\beta_1,\beta_2\approx 1\Rightarrow$ the right-hand side of (2) is small?", which we answer in **A2** below.
**Q2**: Does Theorem 1 imply that $\beta_1,\beta_2\approx 1\rightarrow $ the right-hand side of (2) is small?
**A2**: Thanks for asking. Below we discuss the parameter settings of $\beta_1$ and $\beta_2$ respectively, and we will include the discussion in the revised paper.
**As for $\beta_2$**: Our result does indicate that "to make the right-hand side of (2) small, $\beta_2$ needs to be chosen close to 1". Specifically, according to Lines 169-178 in our paper, $\beta_2$ needs to be picked as $1-O(1/T)$ to minimize the right-hand side of (2), which is close to $1$ when $T$ is large (which is the case in practice). Therefore, our Theorem 1 agrees with the practical choice of $\beta_2$.
**As for $\beta_1$**: Our result does not indicate that "setting $\beta_1$ close to $1$ is necessary to ensure the right-hand side is small". This agrees with practice, where $\beta_1$ is not necessarily chosen close to $1$. As an example, in the well-known work "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", $\beta_1$ is chosen as $0.5$, which is not close to $1$.
**Q3**: Similar question for Theorem 2. Does Theorem 2 imply that $a,b\approx 0$ leads to a small bound?
**A3**: Thanks for asking. We assume the reviewer is referring to $b$, since $a$ is the hyperparameter of the learning rate. We do not require $b\approx 0$ because we set $\beta_2 = 1-\frac{b}{T}$, which is already close to $1$ when $T$ is large.
**Q4**: The above two concerns mean that there is a gap between theory (Theorems 1 and 2) and practice.
**A4**: Based on **A2** and **A3**, we respectfully argue that there is no such gap between theory and practice. Instead, our theorem agrees with the practice where $\beta_2$ is close to $1$ and $\beta_1$ is less constrained.
**Q5**: Can the authors provide lower bounds of the gradient norm of Adam as the right-hand-side of (2)?
**A5**: We thank the reviewer for the suggestion, but have to point out that such kind of bound is missing even for SGD to our best knowledge. This is beyond the scope of our paper and should be treated as a challenging future direction.
**Q6**: Paper [25] and its references are sufficient for both theory and practice.
**A6**: There could be a misunderstanding. As stated in Section 3, "None of existing upper bounds match the lower bound", including those in Paper [25] and its references. Specifically, Paper [25] only obtains a sub-optimal iteration complexity of $O(1/\varepsilon^6)$, which is even slower than SGD. Our paper is the first to obtain an upper bound on the iteration complexity of Adam that matches the lower bound, which, as discussed earlier, is a novel contribution.
**Q7**: Can the authors provide empirical evidence for this paper? For example, it would be nicer to compare numerically Adam using $\beta_1 = 0.9$ and $\beta_2 = 0.999$ with Adam using the new setting of $\beta_1$ and $\beta_2$.
**A7**: We thank the reviewer for the suggestion. However, as pointed out in **A2, A3, and A4**, there is no such gap between the practical setting of $\beta_1$ and $\beta_2$ and "the new setting". Nevertheless, we include a toy example conducted by us and a real-world experiment from existing work to demonstrate the superiority of the choice $\beta_2=1-\Theta(\frac{1}{t})$ over other choices (please see the pdf in the general rebuttal).
---
Rebuttal Comment 1.1:
Title: A2: As for $\beta_1$
Comment: Thank you for your comments. I trained ResNet-18 on CIFAR-10 using Adam and confirmed that Adam with $\beta_1 = 0.9$ performs better than Adam with $\beta_1 = 0.5$. Hence, I doubt your claim. Please provide evidence, independent of previous results, that Adam with $\beta_1 = 0.5$ performs well.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the quick response. We realize that there could be misunderstandings regarding the role of $\beta_1$ in our analysis and in our rebuttal, which we clarify as follows.
Specifically, the reviewer asks us to justify the superiority of $\beta_1=0.5$ over $\beta_1=0.9$ "without previous results". However, neither our paper nor our rebuttal intends to claim that "$\beta_1=0.5$ is superior to $\beta_1=0.9$ in all cases".
We want to reiterate that the goal of our paper is to **characterize the optimal convergence rate of Adam with respect to the first-order accuracy $\epsilon$, i.e., $\Omega(1/\epsilon^4)$, where optimality means closing the gap between the upper bound and lower bound.** In Theorems 1 and 2, we characterized the range of $\beta_1$ and $\beta_2$ that achieves this optimal rate, and **both $\beta_1=0.5$ and $\beta_1=0.9$ (and a wide range of $\beta_1$) lead to the optimal iteration complexity**, which does not contradict the commonly used $\beta_1=0.9$ in Adam. Our theorem (like every other existing analysis of Adam) does not intend to, and cannot, determine which $\beta_1$ is best within the achievable range, or whether $\beta_1\approx 1$ is beneficial for Adam on every task. This is very challenging because:
1. Understanding the effect of momentum on non-convex objectives is still an open problem, even for SGD with momentum. To the best of our knowledge, the benefit of momentum has only been shown for strongly convex objectives. Consequently, **none of the existing analyses of Adam can be used to show that convergence is fast when $\beta_1\approx 1$**.
2. The meaning of "$\approx 1$" should be carefully defined. The reviewer wants something like "the bound is decreasing with respect to $\beta_1$", so that the bound is minimized when "$\beta_1\approx 1$". However, **this simply cannot be true**, because when $\beta_1=1$ the momentum is never updated across the training process (please also refer to the experiment with $\beta_1=0.99$ below). Therefore, even if $\beta_1=0.9$ is beneficial for Adam, it does not follow that choosing $\beta_1$ even closer to $1$ is beneficial. Such non-monotonicity makes the analysis more complicated.
**In our last response**: **We never claimed that $\beta_1=0.5$ is superior to $\beta_1=0.9$ in all cases**. We only intended to claim that there are cases (for example, DCGAN) where $\beta_1=0.5$ also works well, per the above reasoning, and this does not contradict the existence of settings where $\beta_1=0.9$ is better. Instead, it indicates that the optimal $\beta_1$ can be task-dependent and hard to predict. We emphasize again that both $\beta_1=0.9$ and $\beta_1=0.5$ achieve the optimal iteration complexity, as discussed above. The reviewer says "I doubt your claim", but the claim is made by the DCGAN paper (more than 15000 citations), not by us: it states that the "suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training".
**As for experiments**: We train ResNet 18 over CIFAR 10 with batch size 256 and learning rate $0.001$. Due to restriction of time and resources, we choose the epoch number as $50$ and record the training accuracy for different $\beta_1$. The results are listed as follows.
| $\beta_1$| 0.5 | 0.9 | 0.99 |
|---|---|---|---|
|Training Accuracy | 92.13 | 92.29 | 91.61 |
One can observe that changing $\beta_1$ from $0.9$ to $0.5$ causes a much smaller decrease in accuracy than changing $\beta_1$ from $0.9$ to $0.99$. **This indicates that $\beta_1$ cannot be picked too close to $1$ if Adam is to perform well.**
We hope that our response addresses your concern, and we look forward to your reply. | Summary: This paper analyzes the iteration complexity of Adam. It is first pointed out that upper bounds in prior work do not match the existing lower bound; the reason is that the lower bound is proved under smoothness and bounded noise variance, while prior upper bounds make more assumptions. This paper then proves a general upper bound (Theorem 1) which only requires smoothness and bounded noise variance and matches the lower bound up to a logarithmic factor. Later in Theorem 2, a refined analysis is given which further removes the logarithmic factor, thus giving an upper bound that matches the lower bound exactly.
Strengths: This paper analyzes the iteration complexity of Adam, which is a very important problem given Adam's popularity. Moreover, it is pointed out that existing upper bounds do not match lower bound, and this paper closes this gap, which is a nice contribution. The proof techniques may be of independent interest, such as the peeling-off strategy to handle the dependency between the momentum and the adaptive learning rate.
Weaknesses: N/A
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Can SGD match the lower bound? Alternatively, can we show that Adam converges faster than SGD, which is usually observed in practice?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive feedback. Your concern is addressed as follows.
**Q**: Can SGD match the lower bound? Alternatively, can we show that Adam converges faster than SGD, which is usually observed in practice?
**A**: Thanks for asking. SGD can also meet the lower bound according to [1], and thus our results show that Adam can achieve the same order of convergence rate as SGD (to the best of our knowledge, shown for the first time). We acknowledge that our result cannot be used to show that Adam converges faster than SGD, since both meet the lower bound, but we conjecture that this is because our analysis is a worst-case analysis. To show that Adam converges faster than SGD, we would need to carefully model the structure of neural networks and restrict ourselves to a more specific subset of the objective-function space. We leave this as future work.
**Reference**:
[1]. Ghadimi et al., Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming, 2013
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal! | Summary: Adam is one of the most popular stochastic optimization algorithms, especially in deep learning, yet existing convergence theories do not achieve a tight upper bound that meets the lower bound. Moreover, many of them require additional assumptions, such as bounded gradients. This paper shows that Adam can achieve the tight bound of $O ( \epsilon^{-4} )$ without such additional assumptions. This result closes the gap between the practical success of Adam and its theoretical sub-optimality.
Strengths: - The theoretical result is strong. Although there are many existing works that analyze the convergence of Adam, this paper is the first one that proves that Adam can converge with $\mathcal{O} ( 1 / \sqrt{T} )$, or equivalently $\mathcal{O} ( \epsilon^{-4} )$.
- Technical improvements for deriving the result are interesting and clearly explained.
- This paper is easy to read, and the presentation is also clear at least to the experts on the convergence theory of stochastic optimization.
Weaknesses: **Relation to [1] is not clear**
Although I am basically positive about the result of this paper, I am not sure how it relates to the well-known result about the non-convergence behavior of Adam by [1]. They showed that there always exists a problem in which Adam fails to converge, but the authors' result seems to show that Adam can always converge at the optimal rate (i.e., $\mathcal{O} ( \epsilon^{-4} )$). In my current understanding, this is because Theorem 2 requires $1 - \beta_2 = \Theta ( 1 / T )$, which means that, in order to ensure the convergence of Adam, we need to choose the hyper-parameter $\beta_2$ depending on the total number of parameter updates $T$. I think it would be better to clarify the relationship clearly in Section 7. Though the authors mention [1] in the section, the relation to their theoretical result is not clear to me. When it becomes clear in the rebuttal period, I will raise my score.
**There are no experiments**
- I think the condition of $1 - \beta_2 = \Theta ( 1 / T )$ is crucial to achieve the optimal convergence rate, so it would be better to demonstrate it experimentally (I think a toy experiment is enough).
**Minor comments**
- The notations of the output of Adam are inconsistent in Algorithm 1 ($\boldsymbol{w}_r$) and Theorem 2 ($\boldsymbol{w}_\tau$), which is a little confusing.
- Finishing a paper with Related work is not common in my opinion.
- It would be better to add equation numbers to all the equations for the ease of communication between the reviewers and the authors.
**Typos**
- line 20: uderstand -> understand
- line 198: $\boldsymbol{G_{t-2} \rightarrow G_{t-1}}$
- line 248: pratice -> practice
**References**
[1] Reddi, Sashank J., Satyen Kale, and Sanjiv Kumar. "On the Convergence of Adam and Beyond." International Conference on Learning Representations. 2018.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: **Relation to [1]**
As I mentioned in the weaknesses section, I would like to know the relation between the results of this paper and [1].
**What happens when using diminishing step size?**
In the analysis of SGD, a diminishing step size (e.g., $\eta_t = \Theta ( 1 / \sqrt{t} )$) is often used because it is close to practical situations. Is it easy to extend this result to the case of a diminishing step size?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are mentioned in Section 6, which is clear to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments and positive feedback. The raised typos have been duly corrected, and other concerns are addressed below.
**Q1**: Relation to [1] is not clear.
**A1**: Thanks for asking. We did not include a discussion of this because an existing work [2] already explains why one can still establish a convergence result for Adam given the counterexample in [1] (**please see Section 4 in [2] for details; we will include a discussion of this in the revised paper**). The counterexample in [1] is constructed by first choosing the hyperparameters of Adam ($\beta_1$ and $\beta_2$) and then **adversarially** choosing the objective function for the chosen $\beta_1$ and $\beta_2$. Note that the noise variance $\sigma$ of the chosen objective function depends on the chosen $\beta_1$ and $\beta_2$. On the contrary, if the objective function is fixed first, then one can choose appropriate $\beta_1$ and $\beta_2$ so that Adam converges. This is exactly the setting in our paper (and in most optimization papers), where convergence results are provided after the parameters of the objective function (including $L$ and $\sigma$) are fixed. Therefore, there is no contradiction between [1] and our result.
**Q2**: There are no experiments.
**A2**: Thanks for the suggestion. We will include the following discussion in the revised paper.
1. We conduct an experiment on the same toy example as in (Figure 4, [2]) and plot the result in the pdf of the general rebuttal. Concretely, the objective function is defined as $f(x)=\frac{\sum_{i=0}^9 f_i(x)}{10}$, where $f_0(x)=(x-3)^2$ and $f_i(x)=-0.1(x-\frac{10}{3})^2$ for $1\le i\le 9$. In each iteration, we sample $i$ uniformly from $\{0,\dots,9\}$ and use $\nabla f_i$ as the gradient. We compare Adam with different schedulers of $\beta_2$, namely $\beta_2 = 1- \frac{0.1}{t}$, $\beta_2 = 1- \frac{0.1}{t^{0.5}}$, and $\beta_2 = 1- \frac{0.1}{t^2}$ (where $t$ is the iteration index), and observe that Adam with $\beta_2 = 1- \frac{0.1}{t}$ converges the fastest, which aligns with our theory.
2. Also, we notice that there are real-world experiments in [3], which show that Adam with $\beta_2=1-\Theta(1/T)$ converges the fastest. We also include their result in the pdf of the general rebuttal.
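The toy comparison described above can be sketched in a few lines. The following is our own minimal reimplementation, not the authors' code; the step size $\eta=0.01$, $\beta_1=0.9$, starting point, and iteration count are illustrative choices that the rebuttal does not specify:

```python
import math
import random

def grad_sample(x, rng):
    # f(x) = (1/10) * sum_{i=0}^{9} f_i(x), with f_0(x) = (x-3)^2 and
    # f_i(x) = -0.1*(x - 10/3)^2 for i = 1..9; return one sampled gradient.
    i = rng.randrange(10)
    return 2.0 * (x - 3.0) if i == 0 else -0.2 * (x - 10.0 / 3.0)

def adam_toy(beta2_schedule, T=2000, eta=0.01, beta1=0.9, seed=0):
    """Run Adam with a per-iteration beta2 scheduler on the toy objective."""
    rng = random.Random(seed)
    x, m, nu = 1.0, 0.0, 0.0
    for t in range(1, T + 1):
        beta2 = beta2_schedule(t)
        g = grad_sample(x, rng)
        m = beta1 * m + (1.0 - beta1) * g       # first-moment update
        nu = beta2 * nu + (1.0 - beta2) * g * g  # second-moment update
        x -= eta * m / (math.sqrt(nu) + 1e-8)
    return x

# The three schedulers compared in the rebuttal; since f'(x) = 0.02*x,
# |0.02 * x_T| is the full-gradient norm at the final iterate.
for name, sched in [("1 - 0.1/t", lambda t: 1.0 - 0.1 / t),
                    ("1 - 0.1/t^0.5", lambda t: 1.0 - 0.1 / t ** 0.5),
                    ("1 - 0.1/t^2", lambda t: 1.0 - 0.1 / t ** 2)]:
    print(name, "|grad f(x_T)| =", abs(0.02 * adam_toy(sched)))
```

Note that the full gradient of this toy objective is $f'(x)=0.02x$, so the minimizer is $x^*=0$ even though nine of the ten component gradients point away from it, which is what makes the choice of $\beta_2$ matter.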
**Q3**: The notations of the output of Adam are inconsistent in Algorithm 1 and Theorem 2.
**A3**: Thanks for pointing it out. We will revise the paper and use the notation of the output of Adam in Algorithm 1 in both two places.
**Q4**: Finishing a paper with Related work is not common in my opinion.
**A4**: Thanks for pointing it out. We will reorganize the paper and put Related Work after Section 3.
**Q5**: Is it easy to extend this result to the case of diminishing step size?
**A5**: Thank you for the question. Indeed, it is straightforward to extend the result to Adam with a diminishing learning rate by following a similar approach as in Theorem 2 (Theorem 1) and the corresponding rate will only differ from that in Theorem 2 by a $\log T$ factor. This is because, in the proof of Theorem 2, we first analyze the descent lemma for each iteration and subsequently sum them together across iterations. To accommodate a diminishing step size, we can simply replace the learning rate in the analysis of the descent lemma and make minor adjustments when summing them together. We will include this discussion in the revised version of the paper.
**References**:
[1]. Reddi et al., On the Convergence of Adam and Beyond, 2019
[2]. Zhang et al., Adam can converge without any modification on update rules, 2022
[3]. Zou et al., A Sufficient Condition for Convergences of Adam and RMSProp, 2018
---
Rebuttal Comment 1.1:
Title: Thanks for rebuttal!
Comment: Regarding related work, can the authors discuss more about the relationship with [25] in terms of analysis technique?
---
Reply to Comment 1.1.1:
Title: Thank you for your suggestion!
Comment: Dear AC,
We thank the AC for the insightful suggestion. We are more than happy to discuss the technical differences with [25] in more detail in our paper. Specifically, we will add the following paragraph in Section 4.2 - Proof Sketch.
> Recently, [25] proved that Adam converges without correcting the update. We discuss the technical differences between our work and [25] here.
> The most fundamental difference is that the assumptions in [25] are stronger: compared to our paper, [25] additionally assume that the objective function follows an $n$-sum structure, and the stochastic gradients satisfy the $L$-smooth condition (while we only assume this condition holds for full gradients). This makes many of the proof techniques in [25] inapplicable to our paper. Specifically, the proof in [25] can also be roughly divided into addressing Challenge I and Challenge II. We discuss the differences in solving each challenge respectively.
>1. In addressing Challenge I, [25] and we have the following differences:
> * Different surrogate learning rates: [25] use $\nu_{t-1}$ as a surrogate conditioner to disentangle the stochasticity in momentum and adaptive learning rate, while we use $\tilde{\nu}_{t}^i$.
>* Different proof ideas: [25] conduct a case analysis: if the gradient norm is large, then $\nu_{t}$ and $\nu_{t-1}$ are close, allowing $\nu_t$ to be converted to $\nu_{t-1}$; if the gradient norm is small, the first-order term can be directly bounded by the product of gradient norm and update norm. Note that the claim "if the gradient norm is large, then $\nu_{t}$ and $\nu_{t-1}$ are close" requires the $n$-sum structure and $L$-smooth condition for stochastic gradients, and thus cannot be applied in our paper. Instead, our paper proves that the approximation error introduced by converting $\nu_t$ to our surrogate conditioner $\tilde{\nu}_{t}^i$ is at the "Second Order" term level, so the accumulation of approximation errors can be bounded.
>2. In addressing Challenge II, due to the assumptions of the $n$-sum structure and the $L$-smooth condition for stochastic gradients, [25] directly convert the surrogate conditioner $\nu_{t-1}$ to the gradient norm and directly obtain a bound on the gradient norm. However, we do not have such assumptions, so we first obtain a bound on the sum of $E[\Vert\sqrt[4]{\tilde{\nu}_{t}^{1}} \Vert ]$ from Eq. (4), and then use Cauchy's inequality to obtain the final gradient norm bound.
Once again, we appreciate your valuable suggestions. If you feel that there are still places in our paper that need improvement, please feel free to point them out! | Summary: This paper gives a new analysis of the Adam algorithm intended to close the gap between the upper bound of Adam's iteration complexity and the existing lower bound for first-order nonconvex optimization. The authors show that existing analysis of Adam either uses the bounded gradient assumption, achieves a suboptimal iteration complexity, or relies on the mean-squared smoothness assumption. Afterwards, they give a novel analysis of the algorithm that meets the lower bound in dependence on the desired accuracy $\epsilon$.
Strengths: - The new analysis of Adam is conducted under the same assumptions as standard SGD, unlike most prior work.
- The new analysis of Adam achieves the optimal $\frac{1}{\epsilon^{4}}$ complexity with no additional log factors.
- The authors introduce several technical tools that can be helpful in the analysis of adaptive algorithms more generally, for example the stochastic surrogates used are new.
Weaknesses: - My main problem with this paper is that the proof, as it is, is very complicated to check. It would be very helpful if the authors included a section in the deterministic case (no stochasticity) with their full proof in this simplified setting.
- The bounded variance condition is a bit restrictive (see [1]), can the convergence of Adam be derived under any of the more general conditions mentioned in [1]?
- (Minor typos) line 134 should be log 1/\epsilon not log \epsilon. Line 198 $G_{t-1}$ not $G_{t-2}$. Please use \left and \right for braces in line 229.
[1] Ahmed Khaled & Peter Richtárik, Better Theory for SGD in the Nonconvex World, TMLR 2023
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Please address my concern in the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the constructive comments and positive feedback. The raised typos have been corrected, and below we respond to your comments.
**Q1**: Proof in the deterministic case will be helpful.
**A1**: Thanks for the helpful suggestion. We will take your advice and reorganize Section 4.2 by (i) first providing the proof in the deterministic case and then (ii) introducing the additional challenges in the stochastic case and how to address them. Below we sketch the proof in the deterministic case.
**Stage I.** We no longer need the surrogate conditioner $\tilde{\nu}_{t}^i$ to disentangle the correlation between $m_t$ and $\nu_t$; we only need to handle the mismatch between $m_t$ and $G_t$ in the first-order term $-\eta \langle G_t, \frac{1}{\sqrt{\nu_t}}\odot m_t\rangle$. To do so, we first apply the definition $m_t = (1-\beta_1) (G_t+\beta_1 G_{t-1}+ \beta_1^2 G_{t-2}+\cdots)$ and write $$-\eta \langle G_t, \frac{1}{\sqrt{\nu_t}}\odot m_t \rangle= -\eta (1-\beta_1)( \langle G_t, \frac{1}{\sqrt{\nu_t}}\odot G_t\rangle + \beta_1 \langle G_t, \frac{1}{\sqrt{\nu_t}}\odot G_{t-1}\rangle+\beta_1^2\langle G_t, \frac{1}{\sqrt{\nu_t}}\odot G_{t-2}\rangle+\cdots).$$
We proceed by approximating $G_t$ by $G_{t-i}$ in the $i$-th term of the above expansion, and obtain
$$-\eta \langle G_t, \frac{1}{\sqrt{\nu_t}}\odot m_t \rangle= -\eta (1-\beta_1)( \langle G_t, \frac{1}{\sqrt{\nu_t}}\odot G_t\rangle + \beta_1 \langle G_{t-1}, \frac{1}{\sqrt{\nu_t}}\odot G_{t-1}\rangle+\beta_1^2\langle G_{t-2}, \frac{1}{\sqrt{\nu_t}}\odot G_{t-2}\rangle+\cdots) +\text{Error Term}.$$
On the right-hand side of the above equation, all terms except the "Error Term" (which is of the same order as the second-order term in the descent lemma) are non-positive. Applying the above equation to the descent lemma and summing over iterations gives a bound on $\sum_{t=1}^T\sum_{l=1}^d \frac{\vert G_{t,l} \vert^2}{\sqrt{\nu_{t,l}}}$.
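As a quick sanity check of the geometric expansion of $m_t$ used in Stage I, one can verify numerically that the Adam momentum recursion $m_t=\beta_1 m_{t-1}+(1-\beta_1)g_t$ (with $m_0=0$) unrolls to $(1-\beta_1)\sum_{k=0}^{t-1}\beta_1^k g_{t-k}$. This sketch and its helper name are ours, not from the rebuttal:

```python
import random

def check_momentum_expansion(grads, beta1, tol=1e-9):
    """Verify m_t = (1-beta1) * sum_{k=0}^{t-1} beta1^k * g_{t-k} along the run."""
    m = 0.0
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1.0 - beta1) * g  # the recursive definition
        # the unrolled geometric-sum form of the same quantity
        unrolled = (1.0 - beta1) * sum(beta1 ** k * grads[t - 1 - k]
                                       for k in range(t))
        if abs(m - unrolled) > tol:
            return False
    return True

rng = random.Random(1)
assert check_momentum_expansion([rng.gauss(0, 1) for _ in range(50)], 0.9)
```

The two forms agree up to floating-point error, which justifies substituting the expansion into the first-order term.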
**Stage II.** Based on the observation that $(1-\beta_2)G_{t,l}^2= \nu_{t,l}-\beta_2 \nu_{t-1,l}$, we have $\frac{\vert G_{t,l} \vert^2}{\sqrt{\nu_{t,l}}} = \Omega (\sqrt{\nu_{t,l}}-\sqrt{\beta_2} \sqrt{\nu_{t-1,l}})$, which transfers the bound on $\sum_{t=1}^T\sum_{l=1}^d \frac{\vert G_{t,l} \vert^2}{\sqrt{\nu_{t,l}}}$ into a bound on $\sum_{t=1}^T\sum_{l=1}^d \sqrt{\nu_{t,l}}$. Since, by Cauchy's inequality, $$ (\sum_{t=1}^T\sum_{l=1}^d \frac{\vert G_{t,l} \vert^2}{\sqrt{\nu_{t,l}}})(\sum_{t=1}^T\sum_{l=1}^d \sqrt{\nu_{t,l}})\ge (\sum_{t=1}^T \sum_{l=1}^d \vert G_{t,l} \vert )^2,$$ combining the bounds on $\sum_{t=1}^T\sum_{l=1}^d \frac{\vert G_{t,l} \vert^2}{\sqrt{\nu_{t,l}}}$ and $\sum_{t=1}^T\sum_{l=1}^d \sqrt{\nu_{t,l}}$ yields the estimate of the gradient norm and concludes the proof.
**Remark**: Compared to the stochastic case, Stage I is much simpler and no longer needs to disentangle the randomness in $m_t$ and $\nu_t$, and Stage II is also much simpler when transforming the bound on $\sum_{t=1}^T\sum_{l=1}^d \frac{\vert G_{t,l} \vert^2}{\sqrt{\nu_{t,l}}}$ into a bound on $\sum_{t=1}^T\sum_{l=1}^d \sqrt{\nu_{t,l}}$.
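The Stage II step hinges on the identity $(1-\beta_2)G_{t,l}^2=\nu_{t,l}-\beta_2\nu_{t-1,l}$, from which one can derive the precise bound $\frac{G_{t,l}^2}{\sqrt{\nu_{t,l}}}\ge \frac{\sqrt{\nu_{t,l}}-\sqrt{\beta_2}\sqrt{\nu_{t-1,l}}}{1-\beta_2}$, i.e., the $\Omega(\cdot)$ above with constant $\frac{1}{1-\beta_2}$. A small numerical check of this bound along random gradient sequences (our own sketch; the function name is ours):

```python
import math
import random

def stage2_bound_holds(grads, beta2, nu0=0.0):
    # nu_t = beta2*nu_{t-1} + (1-beta2)*G_t^2 is the deterministic Adam recursion;
    # check G_t^2/sqrt(nu_t) >= (sqrt(nu_t) - sqrt(beta2)*sqrt(nu_{t-1}))/(1-beta2).
    nu_prev = nu0
    for g in grads:
        nu = beta2 * nu_prev + (1.0 - beta2) * g * g
        lhs = g * g / math.sqrt(nu) if nu > 0 else 0.0
        rhs = (math.sqrt(nu) - math.sqrt(beta2) * math.sqrt(nu_prev)) / (1.0 - beta2)
        if lhs + 1e-9 < rhs:  # small tolerance for floating-point error
            return False
        nu_prev = nu
    return True

rng = random.Random(0)
for beta2 in (0.5, 0.9, 0.999):
    assert stage2_bound_holds([rng.uniform(-5, 5) for _ in range(1000)], beta2)
```

The inequality follows from factoring $\nu_t-\beta_2\nu_{t-1}=(\sqrt{\nu_t}-\sqrt{\beta_2\nu_{t-1}})(\sqrt{\nu_t}+\sqrt{\beta_2\nu_{t-1}})$ and dividing by $\sqrt{\nu_t}$, which is why summing it over $t$ telescopes.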
**Q2**: Can the assumption on noise be weakened to more general conditions mentioned in [1]?
**A2**: Thanks for asking. A brief answer is that the proof can be extended to the more general noise assumption "$E \Vert O_f(w,z) -\nabla f(w) \Vert^2 \le \sigma_1^2 \Vert \nabla f(w) \Vert^2+\sigma_0^2$" (called "affine noise variance assumption" in [Faw et al., 2022]), but we are not sure if it can be further generalized under the noise assumption "$E \Vert O_f(w,z) -\nabla f(w) \Vert^2 \le \sigma_2^2 (f(w)-f^*)+\sigma_1^2 \Vert \nabla f(w) \Vert^2+\sigma_0^2$".
Below we sketch how to extend the proof under the affine noise variance assumption (**we will provide detailed proof in the appendix of the revised paper**). The core difference of the proofs under different noise assumptions is the way to estimate the first-order term in Stage I. We leverage an auxiliary function $\xi_t=\sum_{l=1}^d\frac{\vert G_{t,l} \vert^2 }{\sqrt{\tilde{\nu_{t,l}}}}$ to handle the affine variance noise, where $\tilde{\nu_t}$ is defined as $\tilde{\nu_t}=\beta_2\nu_{t-1}+(1-\beta_2) \sigma_0^2$.
For simplicity, we focus on the case where $\beta_1=0$, i.e., no momentum. To disentangle the stochasticity between $g_t$ and $\nu_t$, we use $\tilde{\nu_t}=\beta_2\nu_{t-1}+(1-\beta_2) \sigma_0^2$ to approximate $\nu_t$, and thus the first-order term can be written as $$E [\langle G_t,-\eta \frac{1}{\sqrt{\nu_t}}\odot g_t \rangle]=-\eta E [\langle G_t, \frac{1}{\sqrt{\tilde{\nu}_t}}\odot G_t \rangle]+E [\langle G_t,\eta (\frac{1}{\sqrt{\tilde{\nu}_t}}-\frac{1}{\sqrt{\nu_t}})\odot g_t \rangle].$$ The first term on the right-hand side is the main term (which is negative), and we need to control the second term, i.e., the approximation error. By leveraging a similar routine as in the proof of Lemma 1, we arrive at the following estimate of the approximation error.
$$\textbf{Approximation Error} \le \frac{1}{4}\eta E [\langle G_t, \frac{1}{\sqrt{\tilde{\nu}_t}}\odot G_t \rangle]+O(\text{Second-Order Term})+O\Big(\sum_{l=1}^d E\,(1-\beta_2)\frac{\vert g_{t,l}\vert^2 \vert G_{t,l}\vert^2}{(\sqrt{\nu_{t,l}}+\sqrt{\tilde{\nu}_{t,l}})^2\sqrt{\tilde{\nu}_{t,l}}} \Big).$$
On the right-hand side of the above inequality, the last term bears a close resemblance to $\frac{\xi_{t-1}}{\sqrt{\beta_2}}-\xi_t$ (after expansion), and indeed can be directly bounded by $O(\frac{\xi_{t-1}}{\sqrt{\beta_2}}-\xi_t)$ plus some error term by direct calculation (recall that $\xi_t=\sum_{l=1}^d\frac{\vert G_{t,l} \vert^2 }{\sqrt{\tilde{\nu}_{t,l}}}$). The sum of $O(\frac{\xi_{t-1}}{\sqrt{\beta_2}}-\xi_t)$ across $t$ gives $(\frac{1}{\sqrt{\beta_2}}-1)O(\sum_{t=1}^T \xi_t)$, which is smaller than the sum of the main term in the first-order term $-\eta E [\langle G_t, \frac{1}{\sqrt{\tilde{\nu}_t}}\odot G_t \rangle]$ when $\beta_2$ is close to $1$, and thus we can control the approximation error. The rest of the proof should be the same as under the original noise assumption.
---
Rebuttal Comment 1.1:
Title: Thank you for providing detailed rebuttal!
Comment: We will take this into account! | Rebuttal 1:
Rebuttal: We thank ACs, SACs, PCs, and reviewers for the efforts and time spent in handling our paper. According to the suggestions of Reviewer oCro and Reviewer b4JJ, we include several experiments to support our theoretical claims. The plots can be found in the attached pdf file. Specifically:
1. We run Adam with different schedulers of $\beta_2$ on a toy example proposed in [1];
2. We adapt a figure from [2], which trains a ResNet-18 on CIFAR-100 using Adam with different schedulers of $\beta_2$.
All of the experiments indicate that Adam with scheduler $\beta_2=1-\Theta(1/T)$ maintains the fastest convergence, which supports our Theorem 3.
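For concreteness, below is a minimal sketch (our illustration, not the authors' code) of Adam with the horizon-dependent schedule $\beta_2 = 1-\Theta(1/T)$ discussed above; the function name, hyperparameter values, and the deterministic quadratic test problem are our own illustrative choices:

```python
import numpy as np

def adam_with_beta2_schedule(grad_fn, w0, T, eta=0.05, beta1=0.9, c=1.0, eps=1e-8):
    """Adam where beta2 is set from the horizon T as beta2 = 1 - c/T,
    i.e. the 1 - Theta(1/T) schedule discussed above."""
    w = np.asarray(w0, dtype=float)
    m, nu = np.zeros_like(w), np.zeros_like(w)
    beta2 = 1.0 - c / T
    for t in range(1, T + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        nu = beta2 * nu + (1 - beta2) * g ** 2   # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # standard bias correction
        nu_hat = nu / (1 - beta2 ** t)
        w = w - eta * m_hat / (np.sqrt(nu_hat) + eps)
    return w

# Deterministic quadratic f(w) = 0.5 * ||w||^2, so grad f(w) = w.
w_star = adam_with_beta2_schedule(lambda w: w, w0=[3.0, -2.0], T=2000)
```

On this noiseless quadratic the iterate contracts toward the minimizer, consistent with the deterministic analysis above.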
**References**
[1]. Zhang et al., Adam Can Converge Without Any Modification On Update Rules, 2022
[2]. Zou et al., A Sufficient Condition for Convergences of Adam and RMSProp, 2018
Pdf: /pdf/126f24d8af57ee1bbde289492e99fd3fcc713ff3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Credal Marginal MAP | Accept (poster) | Summary: The paper studies algorithms for the marginal MAP (MMAP) problem in Credal networks that generalize Bayesian networks. The paper gives an overview of the problem, existing algorithms for Bayesian networks and generalizes the algorithms to credal networks. Overall, two exact and multiple heuristic approaches are presented. As the exact approaches were unable to deal with larger problems, the experiments are restricted to heuristic approaches.
Strengths: The paper is very well written and gives a good overview of the problem and related work. While Bayesian networks are an old topic, they remain relevant for probabilistic reasoning tasks that require analytical guarantees. However, one problem of Bayesian networks is that specifying the CPDs can be difficult. Credal networks are an interesting generalization that allows using probability intervals rather than point probabilities in order to capture the uncertainty about the encoded knowledge. To the best of my knowledge, there is not much literature on algorithms for Credal networks. The authors generalize state-of-the-art ideas for Bayesian networks to Credal networks and give an empirical evaluation. The paper is therefore an interesting contribution to the probabilistic reasoning literature. The code is attached in the supplementary material to reproduce the experimental results and is a useful resource for the probabilistic reasoning community.
Weaknesses: It could be discussed in more detail what the exact relationship between the proposed algorithms and the corresponding algorithms for BNs is. Are they generalizations in the sense that if the intervals in the Credal network are tight, then the Credal network algorithms correspond to the BN algorithms? Are there challenges in generalizing the algorithms to credal nets or is the generalization straightforward? And how do the runtime guarantees for Credal networks compare to those for Bayesian networks?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you elaborate on the relationship between CN and BN algorithms (see weaknesses)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions.
Regarding the relationship between the CN and BN algorithms, in principle, if the probability intervals in a CN are tight, namely they collapse to point probabilities, then the CMMAP task we defined for CNs collapses to the MMAP task for BNs. In this case, the CMBE algorithm, for example, will be identical to the MBE algorithm for MMAP in BNs because CMBE will no longer have to propagate sets of potentials.
However, in the more general case when we introduce probability intervals and/or more complex credal sets, the two tasks are different from a computational complexity point of view: MMAP for BNs is NP^PP-complete, whereas MMAP for CNs is NP^NP^PP-complete. Therefore, the corresponding MMAP algorithms will be significantly different (i.e., those for CNs vs those for BNs).
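To make the CMMAP task concrete, here is a hedged toy sketch (not from the paper) of maximax CMMAP by brute-force enumeration of credal-set vertices on a small chain network; all numbers and names are illustrative, and enumerating extreme points is sound here because the objective is linear in each local model:

```python
from itertools import product

# Toy credal chain A -> B -> C (all binary), evidence C = 0, MAP over A,
# with B marginalized out. Each local credal set is given by its extreme
# points; all numbers are illustrative.
PA = [(0.3, 0.7), (0.5, 0.5)]                       # candidate P(A)
PB_A = [{0: (0.8, 0.2), 1: (0.4, 0.6)},             # candidate P(B|A)
        {0: (0.6, 0.4), 1: (0.2, 0.8)}]
PC_B = [{0: (0.9, 0.1), 1: (0.3, 0.7)},             # candidate P(C|B)
        {0: (0.7, 0.3), 1: (0.5, 0.5)}]

def upper_score(a, c=0):
    """Upper probability of (A=a, C=c): max over vertex combinations
    of sum_b P(a) P(b|a) P(c|b)."""
    return max(sum(pa[a] * pb[a][b] * pc[b][c] for b in (0, 1))
               for pa, pb, pc in product(PA, PB_A, PC_B))

# Maximax CMMAP: brute force over the MAP variable (exponential in
# general -- the task is NP^NP^PP-complete, hence the approximations).
cmmap_a = max((0, 1), key=upper_score)
```

The brute force over vertex combinations is exactly what makes exact inference intractable at scale, which motivates the mini-bucket and search-based schemes in the paper.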
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanations. | Summary: This paper is about a generalisation of marginal MAP (MMAP) for Bayesian networks (BNs). The authors allow the BN parameters to vary in (credal ) sets. The goal is, therefore, to find the configuration with the maximum upper (wrt the credal sets) probability. The authors first consider exact inference. A number of schemes based on mini-buckets are then obtained to address approximate inference and empirically validated.
Strengths: The problem is important and very general (in a sense, most of the classical inferences in PGMs can be seen as a subcase of CMMAP). This seems to be the first serious attempt to address the problem in the credal case, and the experiments show how the proposed approximate schemes allow solving a large number of non-trivial instances.
Weaknesses: The authors only consider maxi-max and maxi-min versions of the problem. Credal networks are often used to model a condition of indecision between multiple options, and considering other decision criteria possibly leading to multiple options (e.g., maximality or interval dominance) would be interesting. This is also related to the ideas sketched by De Bock et al. (Neurips 2014).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The ApproxLP scheme based on linearisation is used to address MAR in CNs. Can we extend it to MMAP and then perform a comparison against the methods proposed here? This point is partially addressed in Section 5.2, but the answer seems to be a bit inconclusive.
CMBE is giving an upper bound. Do the search methods give a lower bound? If so, is it never the case that the two results coincide, this meaning that the solution is exact?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I don't see significant societal impacts for the present work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions.
We are already exploring several ideas around interval dominance as well as alternative criteria such as maximality and/or E-admissibility. Therefore, we are very thankful for your suggestions and we plan to address these issues in our future work.
Extending ApproxLP to CMMAP is indeed an interesting idea. We believe that in principle it is possible to do so. Therefore, it is an excellent suggestion of future work together with developing new, more efficient approximation schemes for CMMAP.
At the moment, the search based schemes can give a lower bound only if the approximate marginal inference algorithm used to evaluate the current solution is guaranteed to provide a lower bound on the probability of evidence (either the lower probability or the upper probability). And yes, in principle, if we have that the upper bound produced by CMBE is the same as the lower bound produced by another scheme then they both found the exact solution. Therefore, one direction of future work is to develop a scheme that computes in an anytime manner both lower and upper bounds on the CMMAP values.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I am happy to confirm my positive opinion about the paper. | Summary: This work presents novel algorithms for performing exact and approximate marginal MAP inference in credal networks with discrete-valued factors, and evaluates the computational and inferential effectiveness of these algorithms on a number of benchmarks.
Strengths: Firstly, I enjoyed reading the paper and think it’s fantastic how basic research into PGM inference is still being done. A strength of the paper is the technical depth demonstrated to devise these novel inference algorithms. I thought the design of the ablation study was solid and plenty of real-world experiments given.
Weaknesses: One weakness is that the explanation of the algorithms proposed is difficult to follow, but perhaps this is unavoidable with a heavily technical topic. I think it would strengthen the paper to explain the significance of marginal MAP in credal networks in particular, and performing experiments on counterfactual analysis. More discussion/experiments on counterfactual inference may strengthen the apparent contribution/impact of solving MMAP in credal networks.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Is the case where the credal network has arbitrary real-valued distributed variables, rather than all discrete-valued variables, important for any of the applications discussed?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Method seems only to apply to Bayesian Networks in the narrow sense of the term with discrete factors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions.
We will revise the presentation and will try to expand the discussion of the algorithms in order to address the concerns identified during the review. Certainly the content is very technical but we will do our best to make it more didactic.
We appreciate your suggestion to perform experiments on counterfactual analysis. We will definitely take it into consideration and try to formalize the connection between the CMMAP task and counterfactual analysis. We emphasize that causal reasoning is a clear application of credal networks (as explained in our Ref. [12]), a point that, we think, adds to the relevance of the paper to NeurIPS. More specifically, the estimation of distributions for exogenous variables from the observation of endogenous variables directly leads to credal networks, so our techniques can be directly applied to counterfactual reasoning given endogenous distributions and associated directed graphs that capture causal mechanisms (as in fact shown by the paper by Zaffalon et al).
Regarding your questions about real-valued variables, yes, we believe that it is almost always the case that in real-world applications many variables of interest are real-valued. Therefore, we will adopt the broad definition of Bayesian/credal networks as the focus on discrete factors is a feature of this paper but a larger perspective including continuous variables can be very important in applied work and may be a path for future research.
Furthermore, real-valued variables typically require developing different kinds of algorithms compared with the variable elimination and search based ones we proposed in this paper. Handling real-valued variables is also an important direction of our future work. | Summary: The paper proposes inference algorithms for credal networks for the marginal MAP inference task. The idea is to use variable elimination methods for this task. An exact inference algorithm is proposed as well as approximations using mini-bucket partitioning. Further, stochastic local search procedures combined with existing approximate marginal inference methods are proposed to solve the MMAP task. Experiments are performed on randomly generated credal networks as well as real-world Bayesian networks converted to credal networks.
Strengths: This looks to be the first work on MMAP for credal networks which seems significant since MMAP is a hard but important task for PGMs. The use of existing inference algorithms with the local search methods gives a general family of MMAP algorithms. The evaluation considers a large number of benchmarks. The paper is generally well-written with clear contributions.
Weaknesses: The relatively poor performance of CMBE may indicate it is hard to get more reliable performance for MMAP (e.g. by increasing i-bound of CMBE) for credal networks compared to approximations for other PGMs. The stochastic local search algorithms which seem to work much better in the experiments may be harder to trade-off w.r.t accuracy vs complexity.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What strategies could we use to control/trade-off accuracy vs complexity in the algorithms where the results show best performance (e.g. SA)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There are no limitations explicitly mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions.
One way to address the tradeoff accuracy vs complexity in the case of algorithm CMBE for example is to also bound the size of the sets of potentials that are propagated during elimination (or approximate somehow these sets of potentials). This way we may be able to increase the i-bound (thus hoping to improve accuracy) while lowering to some extent the computational complexity of the elimination process. Clearly, we will need to expand the empirical evaluation of this proposed scheme; we hope our paper will open an interesting line of research exploring better bounds for this provably hard problem.
For the local search based algorithm we may be able to lower the computational effort by developing a faster, incremental scheme to compute the scores of all neighbours using the score of the current solution. At the moment, each neighbour’s score is computed from scratch using an approximate marginal inference algorithm and this negatively impacts the running time especially on larger problem instances. The comments by the reviewer do suggest exciting paths to be pursued.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for your response and look forward to follow-on improvements to this work. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Siamese Masked Autoencoders | Accept (oral) | Summary: In this paper, the authors propose a simple extension to Masked Autoencoders (MAE) to be able to pre-train on videos: SiamMAE. Two frames are sampled, independently encoded, and then asymmetrically masked. A transformer decoder is used to predict the missing patches in the masked image. The authors show that by masking a high proportion of patches (0.95) in the future frame and leaving the past frame unmasked, they are able to encourage the network to learn a more object-centric representation and focus on object motion rather than low-level image details. The authors show that this simple approach outperforms previous methods on many down-stream tasks and perform an extensive ablation to examine the architecture choices.
Strengths: Very well written paper. Some results are particularly impressive (e.g. more than 20% gain over VideoMAE)
Simple but effective method and the honesty of presenting this as-is (instead of disguising the method as being more complicated than it needs to be) I think should be appreciated.
Good comparison to other work and a great ablation section explaining many design choices
Weaknesses: Can only see two minor weaknesses:
1. The approach adds little over MAE and feels incremental, however it works very well and hasn’t been done before.
2. This approach relies on the temporal smoothness found in many curated datasets, such as Kinetics. However, for in-the-wild videos with many sharp scene changes, this assumption becomes less likely to hold. It should be possible to split long videos into scenes in an unsupervised manner and then sample frames within a scene. It would be nice to see a discussion of how this can be applied to large-scale internet datasets (which are becoming very popular for foundation models).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: With standard MAE, it is possible to also mask the input images for the downstream tasks to speed-up computation. It seems that this avenue for speed-up would not be possible with this current approach since the previous frame has to be unmasked?
Could the authors clarify the reason for not including temporal positional encoding (was this ablated?)
It would be interesting to see a bit more how this frame-based method compares to video-based methods on downstream tasks (for example more comparisons like VideoMAE). Perhaps focusing on compute efficiency?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. We address the reviewer concerns below:
>*Novelty over MAE*
We agree that our method is a simple modification of MAE albeit one which has not been explored in the past. We hope that the simplicity, efficacy and extensive empirical analysis of our method is a valuable contribution to the community.
>*Application to “in-the-wild” datasets*
Predicting the future frame based on the past frame to learn correspondence is effective because the two frames often have some degree of overlap. We consider a maximum frame gap of 48 frames, equivalent to 1.6s at 30fps. Over such short time horizons, temporal smoothness is a reasonable assumption. However, we agree that "in-the-wild" datasets, particularly egocentric datasets like Ego4D, might have frequent sharp scene changes due to head movements. Exploring these datasets is a promising future direction (refer to L298), especially when examining if the current strategy of random sampling remains effective. We will include this discussion in the revised text.
>*Masking input images for downstream tasks to speed-up computation*
To the best of our knowledge, the masking strategy of MAEs is known to speed up the training process, as the encoder acts on a small set of unmasked tokens. During inference, generally the entire image is processed by the encoder [1, 2]. It is possible that some works have done inference on masked images; however, we are not aware of such works.
[1] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Li, Yanghao, et al. "Scaling language-image pre-training via masking." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
>*Reason for no temporal position encoding*
Temporal position embeddings (TPE) enable the network to distinguish between tokens from distinct frames. In SiamMAE, the encoder processes each frame individually, and the sole interaction between tokens of different frames occurs through the cross-attention layer in our cross-self decoder. Hence, the network does not need TPEs to achieve good performance. We validated this hypothesis by conducting the proposed ablation study, adding TPEs to the decoder. When using a joint decoder, TPEs improve the performance. Ideally, if position embeddings are redundant, the network should achieve similar performance with zero embedding weights. However, in practice we found that when using a cross-self decoder, the presence of TPEs hurts the performance.
| encoder | decoder | temporal pos. embed | J & Fm |
| ----------- | -------------- | :------------------: | --------- |
| siam | joint | ✓ | 57.3 |
| siam | joint | | 56.7 |
| siam | cross-self | ✓ | 57.6 |
| **siam** | **cross-self** | | **58.1** |
>*Comparison on additional downstream tasks*
We address this in our global response and re-iterate here for convenience.
We agree with the general sentiment of the comment, emphasizing the evaluation of a self-supervised representation learning method across a wide array of tasks. While we aim to demonstrate the versatility of our method, following prior work [12, 14, 16], we've limited our claims and experiments to validate our method's effectiveness at learning representations for visual correspondence. A key advantage of current evaluation strategy is its computational efficiency as it requires no training. Evaluation on video recognition benchmarks often involves fine tuning models for 300 epochs (as seen in VideoMAE, MAE-ST), requiring approximately 32-64 industrial-grade GPUs (such as V100, A100).
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for addressing the comments in my review. I feel you have adequately done so. | Summary: The paper proposes to use Siamese Masked Encoders for establishing correspondence for video input data. Uses the concept of predictive learning based on Masked Auto Encoder. Paper proposes to use asymmetric masking for present and future frames. Achieves best results in self-supervised setting for video label propagation tasks.
Strengths: Strengths:
1) The paper is well written for most of the parts and aptly elucidates the advantage of using a masked auto-encoder-based method for object-based correspondence.
2) Discusses in detail the architecture design choices for the encoder and decoder, and uses a final design which is intuitive and simple, and also focuses on relevant ablation studies.
3) The paper shows and achieves better results for object segmentation, part segmentation, and pose propagation tasks.
Weaknesses: Weakness:
1) Frame sampling: Table 3b discusses the effect of the frame gap; however, it does not comment on the minimum frame overlap needed between the pair of frames for which predictions are to be made. Apart from asymmetric masking, the degree of overlap between consecutive frames is also an important factor.
2) Fails to discuss probable failure cases and limitations that a more extensive qualitative analysis of results across the various tasks would reveal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions:
1) Ablation experiment comparing with FrameMAE: since FrameMAE provides both frames to the joint encoder, a fairer comparison would be FrameMAE with asymmetric masking, as the FrameMAE results presented in Table 2 (b, d) do not seem too far behind.
3) Section 4.4: This subsection would be more aptly titled "Further Insights" rather than "Qualitative Results".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Overall, the idea presented in the paper is lucid and the necessary details for reproducing the experiments are mentioned. I think the paper takes a step in the right direction towards improving the well-known (object-level) correspondence problem using predictive learning. However, I feel the paper lacks a complete study based on the image overlap percentage, zero-shot results for occluded scenes and object parts, and a discussion of failure cases (as seen from the qualitative results obtained).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. We address the reviewer concerns below:
>*Frame sampling with overlap analysis*
To perform overlap analysis, we sampled video frames from the Kinetics-400 validation set with the specified frame gap and calculated two image similarity metrics: mean squared error (mse) and structural similarity index measure (ssim). We observed that either a very high overlap (low frame gap, high ssim, and low mse) or a low overlap (high frame gap, low ssim, and high mse) adversely affects performance.
Sampling with a frame gap of 16 or within a range of [4, 48] yields the best results. Interestingly, the overlap metrics for a frame gap of 16 and [4, 48] are comparable, suggesting that a particular degree of overlap is important for best results.
| frame gap | ssim | mse | J & Fm |
| ---------- | -------- | ------- | --------- |
| 4 | 0.6231 | 0.0230 | 56.7 |
| 8 | 0.5343 | 0.0360 | 57.8 |
| 16 | 0.4749 | 0.0480 | 58.0 |
| 32 | 0.4221 | 0.0597 | 56.3 |
| 4-48 | 0.4548 | 0.0528 | 58.1 |
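For readers who want to reproduce a rough version of this overlap analysis, the sketch below (our illustration, not the authors' code) computes MSE and a simplified single-window SSIM in plain NumPy on stand-in frames; the standard SSIM averages over local windows, so values will differ from off-the-shelf implementations:

```python
import numpy as np

def mse(x, y):
    return float(np.mean((x - y) ** 2))

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM over the whole image (inputs in [0, 1]).
    The standard metric averages this quantity over local windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Stand-in "frames": identical content plus noise lowers ssim and
# raises mse, loosely mimicking the effect of a growing frame gap.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.clip(frame_a + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
```

In the actual analysis above, real Kinetics-400 frame pairs at the stated frame gaps would replace the synthetic arrays.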
>*Qualitative analysis & discussion of failures*
We've included a qualitative failure analysis in a file attached to the global response and will incorporate it into the paper.
We evaluate the quality of learnt representations using label propagation and consequently inherit its limitations. Specifically, the inference algorithm lacks semantic understanding, leading to globally inconsistent labels (refer to examples in the figure). This limitation can be overcome by fine tuning the learnt representations with task specific architectural changes. Additionally, there are instances where the inference process might miss intricate object details, like the spokes of a tire. While this shortcoming can be mitigated by using a smaller patch size during training and inference, it comes at a higher compute cost.
>*Comparing asymmetric masking with FrameMAE*
The results in Table 2b include the comparison suggested i.e. FrameMAE with asymmetric masking. We will update the text to clarify this. For completeness the table below compares relevant FrameMAE variations with SiamMAE. The combination of siamese encoder, a cross-self decoder with asymmetric masking works the best. In the table below (a) denotes asymmetric masking and (s) denotes symmetric masking.
| encoder | decoder | mask ratio | J & Fm |
| ----------- | -------------- | ------------ | --------- |
| joint | joint | 0.50 (s) | 51.8 |
| joint | joint | 0.75 (s) | 55.4 |
| joint | joint | 0.90 (s) | 51.9 |
| joint | joint | 0.95 (a) | 49.7 |
| **siam** | **cross-self** | **0.95 (a)** | **58.1** |
>*Change title of section 4.4 to Further Insights*
We will update the text to incorporate the suggestion.
---
Rebuttal 2:
Title: Reply to Authors
Comment: Most of my concerns have been addressed. I would expect the authors to incorporate the changes pointed out (and accepted) into the final version of the paper. I would like to maintain my rating. | Summary: * This paper proposes Siamese Masked Autoencoders for learning visual correspondence from videos, called SiamMAE.
* SiamMAE randomly samples a pair of video frames and randomly masks 95% of the patches of the future frame; the pair of video frames is then passed into a visual encoder (ViT) and a cross-attention decoder to reconstruct the target.
* The authors conduct several experiments on downstream tasks (VOS, human pose propagation), showing superior performance.
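The asymmetric masking summarized here can be sketched as follows (an illustrative reconstruction, not the authors' implementation); the 0.95 ratio follows the paper, while the function name and the 196-token ViT layout (224x224 image, 16x16 patches) are our assumptions:

```python
import numpy as np

def asymmetric_mask(num_patches, future_mask_ratio=0.95, rng=None):
    """Return visible patch indices for a (past, future) frame pair:
    the past frame is fully visible, while the future frame keeps only
    a (1 - future_mask_ratio) fraction of randomly chosen patches."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_keep = max(1, int(round(num_patches * (1 - future_mask_ratio))))
    past_visible = np.arange(num_patches)                            # frame t
    future_visible = np.sort(rng.permutation(num_patches)[:n_keep])  # frame t + delta
    return past_visible, future_visible

# A 224x224 image with 16x16 patches gives 14*14 = 196 tokens; with a
# 0.95 ratio only ~10 future-frame patches remain visible.
past, future = asymmetric_mask(196)
```

The decoder then has to reconstruct the ~95% missing future patches from the few visible ones plus the fully visible past frame, which is what pushes the model toward correspondence rather than low-level copying.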
Strengths: * The paper presents a simple, yet highly effective method for the challenging video self-supervised framework.
* The proposed method is well motivated and intuitive with excellent performance.
* The ablation is sufficient and the writing is excellent.
Weaknesses: * Experiments on video recognition tasks should be reported.
* Although SiamMAE is pretrained on video frames, I still think the authors should conduct experiments on image downstream tasks (such as COCO detection and segmentation) to show image representation capacity.
* In Table 1, the previous video SSL method (VFS) is also pretrained on Kinetics with a ResNet-50; however, the performance of SiamMAE with ViT-S/8 is similar on DAVIS & VIP & JHMDB. Is there any explanation for this?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: please refer to the strength and weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: no limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. We address the reviewer concerns below:
>*Experiments on video and image recognition*
We address this in our global response and re-iterate here for convenience.
We agree with the general sentiment of the comment, emphasizing the evaluation of a self-supervised representation learning method across a wide array of tasks. While we aim to demonstrate the versatility of our method, following prior work [12, 14, 16], we've limited our claims and experiments to validate our method's effectiveness at learning representations for visual correspondence. A key advantage of current evaluation strategy is its computational efficiency as it requires no training. Evaluation on video recognition benchmarks often involves fine tuning models for 300 epochs (as seen in VideoMAE, MAE-ST), requiring approximately 32-64 industrial-grade GPUs (such as V100, A100).
>*VFS performance*
VFS is a state-of-the-art contrastive self-supervised representation learning method for visual correspondence. VFS learns representations by maximizing similarity across different frames from the same video. This training objective is directly aligned with how the downstream performance is measured i.e., by calculating the similarity of patches for label propagation.
In this work, our goal was to develop a predictive learning method which can match or outperform contrastive learning approaches for learning visual correspondence. Historically, the performance of predictive learning methods has trailed behind contrastive self-supervised learning approaches.
This can be attributed to the training objective of predictive learning methods like SiamMAE, which focuses on low level pixel details and is not directly aligned with the downstream evaluation procedure.
Despite this shared disadvantage, we significantly outperform prior predictive learning based methods (+22.7 improvement over VideoMAE). Finally, we achieve our stated goal as the ViT-S/8 model improves over VFS by +2.5 J&Fm on DAVIS, +2.7 mIOU on VIP, and +1.0 PCK\@0.1 on JHMDB.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, I have no further questions and will keep my initial rating. | Summary: This paper focuses on the self-supervised learning for video representations. The proposed SiamMAE operates on pairs of randomly sampled video frames and asymmetrically masks them, and then predicts the missing patches for visual representation learning. SiamMAE achieves significant performance and outperforms state-of-the-art self-supervised methods on video object segmentation, pose keypoint propagation, and semantic part propagation tasks.
Strengths: 1 The motivation is clear and strong.
2 The proposed asymmetric masking and cross-self decoder are effective and achieve good performance.
Weaknesses: 1 This paper mainly evaluates the proposed method on tracking problems. How is the performance on video classification tasks, such as UCF101 and HMDB51?
2 In Table 2(c), grid mask achieves better performance than random mask with 0.5 mask ratio. Increasing the mask ratio will improve the performance of random masking. How is the performance when increasing the mask ratio for grid masking?
3 This paper only investigates the ViT-S backbone. It is better to also leverage larger models, such as ViT-B, to verify the effectiveness of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My concerns mainly lie in the experiments.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. We address the reviewer concerns below:
>*Results on video classification tasks*
We address this in our global response and re-iterate here for convenience.
We agree with the general sentiment of the comment, emphasizing the evaluation of a self-supervised representation learning method across a wide array of tasks. While we aim to demonstrate the versatility of our method, following prior work [12, 14, 16], we've limited our claims and experiments to validating our method's effectiveness at learning representations for visual correspondence. A key advantage of the current evaluation strategy is its computational efficiency, as it requires no training. Evaluation on video recognition benchmarks often involves fine-tuning models for 300 epochs (as seen in VideoMAE and MAE-ST), requiring approximately 32-64 industrial-grade GPUs (such as V100 or A100).
>*Role of grid masking*
As suggested, we conducted additional ablations for grid masking with different masking ratios. We find that the performance increases when we increase the grid masking ratio to 0.75. However, it decreases when we further increase the masking ratio to 0.95. An advantage of grid masking is that the masking pattern encourages the network to exploit spatio-temporal correlations. However, with a very high masking ratio the network can no longer rely on temporal correlations, leading to worse performance.
All the settings for the results below follow Table 2c.
| mask ratio | pattern | J & Fm |
| --------------- | ------------ | --------- |
| 0.50 (s) | grid | 48.2 |
| 0.75 (s) | grid | 53.8 |
| 0.95 (s) | grid | 49.0 |
| **0.95 (a)** | **random** | **58.1** |
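For concreteness, the two masking patterns compared in this ablation could be sketched as follows for a 14×14 token grid (196 tokens, as with ViT-S/16 on 224×224 inputs). This is an illustrative sketch under those assumptions, not the authors' implementation; the helper names are hypothetical, and the grid variant shown is the ratio-0.75 case (keep one token per 2×2 cell).

```python
import numpy as np

def random_mask(num_tokens: int, ratio: float, rng: np.random.Generator) -> np.ndarray:
    """Boolean mask (True = masked): masks `ratio` of the tokens uniformly at random."""
    n_masked = int(num_tokens * ratio)
    mask = np.zeros(num_tokens, dtype=bool)
    idx = rng.choice(num_tokens, size=n_masked, replace=False)
    mask[idx] = True
    return mask

def grid_mask(height: int, width: int) -> np.ndarray:
    """Boolean mask keeping one token per 2x2 cell, i.e. a masking ratio of 0.75."""
    mask = np.ones((height, width), dtype=bool)
    mask[::2, ::2] = False  # keep every other token along both dimensions
    return mask.reshape(-1)

rng = np.random.default_rng(0)
m_rand = random_mask(196, 0.95, rng)  # asymmetric setting: 95% of the future frame masked
m_grid = grid_mask(14, 14)            # regular grid pattern, masking ratio exactly 0.75
```

The intuition in the rebuttal maps onto this sketch directly: the regular grid always leaves a temporally predictable subset visible, whereas a high random ratio forces the network to rely on the unmasked past frame.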
>*Results on larger backbones ViT-B*
We address this in our global response and re-iterate here for convenience.
We note that ViT-S has approximately the same number of parameters as ResNet-50, enabling us to compare our method across a diverse set of baselines. Moreover, ResNet-50 is the largest backbone explored by prior work [12, 14, 16], all of which aimed at improving representations for correspondence. However, we agree that a systematic investigation into the scalability of our method in terms of model size (refer to L297) would be valuable. Practically speaking, we are constrained by resources. For instance, training a ViT-S/8 model on 8 Titan RTX GPUs for 2000 epochs requires approximately 16 days. Given the compute and time constraints, we trained a ViT-B/16 model for 400 epochs.
| Model | J & Fm |
| ---------- | --------- |
| ViT-S/16 | 58.1 |
| ViT-B/16 | 58.6 |
The improvement in performance is encouraging. To contextualize the magnitude of the improvement, we note that scaling DINO from ViT-S/16 to ViT-B/16 results in an improvement of 0.5 J&Fm.
---
Rebuttal Comment 1.1:
Comment: Most of my concerns are addressed. However, I still feel it is important to demonstrate the effectiveness of the method in the video classification task. HMDB51 is a small-scale dataset with only 3.5k training clips. Evaluating the method on this dataset will not introduce too much computational cost but can further verify the results.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer
Comment: Thank you for your comments and suggestions. We hope to include some results on activity recognition benchmarks in the final version. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their thoughtful and constructive feedback. We are glad that **all** the reviewers found our method simple, clever and intuitive, our analysis and ablations to be thorough and convincing, and results impressive. The main aim of this rebuttal is to improve this work further by incorporating reviewer suggestions and comments. Specifically, we have conducted all additional ablations and experiments suggested by the reviewers. Here we comment on the feasibility and scope of two suggested avenues of improvement and provide some encouraging preliminary results.
>*Training larger backbones like ViT-B & ViT-L*
We note that ViT-S has approximately the same number of parameters as ResNet-50, enabling us to compare our method across a diverse set of baselines. Moreover, ResNet-50 is the largest backbone explored by prior work [12, 14, 16], all of which aimed at improving representations for correspondence. However, we agree that a systematic investigation into the scalability of our method in terms of model size (refer to L297) would be valuable. Practically speaking, we are constrained by resources. For instance, training a ViT-S/8 model on 8 Titan RTX GPUs for 2000 epochs requires approximately 16 days. Given the compute and time constraints, we trained a ViT-B/16 model for 400 epochs.
| Model | J & Fm |
| ---------- | --------- |
| ViT-S/16 | 58.1 |
| ViT-B/16 | 58.6 |
The improvement in performance is encouraging. To contextualize the magnitude of the improvement, we note that scaling DINO from ViT-S/16 to ViT-B/16 results in an improvement of 0.5 J&Fm.
>*Evaluation on video action recognition and image recognition tasks*
We agree with the general sentiment of the comment, emphasizing the evaluation of a self-supervised representation learning method across a wide array of tasks. While we aim to demonstrate the versatility of our method, following prior work [12, 14, 16], we've limited our claims and experiments to validating our method's effectiveness at learning representations for visual correspondence. A key advantage of the current evaluation strategy is its computational efficiency, as it requires no training. Evaluation on video recognition benchmarks often involves fine-tuning models for 300 epochs (as seen in VideoMAE and MAE-ST), requiring approximately 32-64 industrial-grade GPUs (such as V100 or A100).
Pdf: /pdf/30f07fb099e6a8d970f1cf13c0d2fb2e424f7f3f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper considers the problem of using self-supervised learning from video to obtain a representation that is well-suited to the task of estimating correspondence between a pair of images. They propose a significant modification of the MAE training procedure which is adapted for estimating correspondence: one image is not masked at all while the other has most (90%+) of its tokens masked, and the same (i.e. a siamese) encoder is applied to both. This is designed to require the model to internally establish correspondence. The model is trained for pixel prediction on Kinetics-400 and evaluated on several propagation tasks (object mask, part masks, human pose) using kNN inference to establish a dense correspondence field. SiamMAE is shown to greatly outperform existing self-supervised learning procedures with comparable backbone architectures, including methods trained on video. Ablative experiments confirm the importance of combining a siamese encoder with asymmetric masking. Visualization of the attention maps show that the model pays strong attention to object boundaries, seemingly a novel attribute.
Strengths: 1. Good motivation and contextualization with respect to past work.
1. The modification of the MAE procedure is simple but clever, and manages to extract much more information for correspondence from video than past methods.
1. Visualization of predicted images is quite impressive (Figure 2), despite the main goal being to learn a feature extractor for correspondence.
1. Comprehensive evaluation with 3 different tasks and wide selection of relevant baselines.
1. Ablative experiments verify the importance of each component of the design.
1. Hyper-parameters provided for reproducibility.
Weaknesses: 1. I'm not sure about the emphasis on predicting the _future_. It seems that the temporal order could be reversed (i.e. predict the past given the future) or randomized and I would expect similar results. Has this already been tested?
1. It wasn't abundantly clear how the patch-patch similarity was obtained. It seems to be taken from the cross-attention values within the decoder (line 239). However, this could be more clear since the decoder may contain multiple cross-attention layers with each having multiple heads?
1. It wasn't clear how k-NN and the queue were used to perform propagation. This should be explained in more detail or a reference provided.
1. No code provided at this stage.
1. Lack of confidence intervals (not a major issue - delta is quite large in most cases).
1. No evaluation of ViT-B and ViT-L (not a major issue - impressive results obtained with smaller model).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (Please also address or correct weaknesses above.)
1. What is the purpose of the [CLS] tokens? Is the model much less effective without it? (It is surprising that the attention masks for these tokens were so salient given that they are not involved directly in the loss.)
Suggestions:
1. I wonder whether it's possible to identify a general principle/strategy for preventing shortcut learning, of which this is just one instance?
Minor edits: (no need to respond)
1. Bold values in Tables 2d and 3b seem incorrect (e.g. 58.4 > 58.1, 56.7 > 56.5).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Additional limitation:
1. Even though Kinetics-400 was used without labels, it is possible that its image distribution is quite similar to that of the downstream tasks. It would be good to discuss this potentiality, and say that the impact of dataset similarity has not been investigated?
I do not see any negative societal impacts stemming from this individual paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and suggestions. We address the reviewer concerns below:
>*Emphasis on predicting the future*
We agree with the reviewer that reversing the temporal order should not significantly alter the results. We conducted two additional ablation studies: one where we always predict the past frame (f1) and another where the order of frame prediction (f1 or f2) is randomized. All the settings for the results below follow Table 2.
| prediction target | J & Fm |
| ----------------- | --------- |
| f1 (past) | 57.5 |
| random [f1, f2] | 57.8 |
| **f2 (future)** | **58.1** |
All settings perform reasonably well, with our default setting (i.e., predicting the future) performing the best. We emphasize predicting future behavior due to its natural alignment with most real-world applications, which often necessitate the anticipation or prediction of agents' future behavior. We will update the text with this discussion and ablation.
>*Patch-patch similarity calculation*
We follow prior work on representation learning (e.g. MAE) and use the output of the encoder for calculating patch-patch similarity. The decoder is only used during pre-training. We will update the text to clarify the same.
>*Evaluation methodology*
Our evaluation methodology follows prior work and we provide a reference for the same in the text (please see L-189, references 14-16). For completeness we provide a short description here.
All evaluation tasks are cast as video label propagation, where the goal is to predict labels for each pixel in the target frames of a video, using only the ground-truth of the initial frame (i.e., the source). We measure the cosine similarity of each pixel, or patch, in the target frame with all the patches within its spatial neighborhood from the preceding m frames. The label assignment is then based on the labels of the top-k patches that have the highest similarity. Please note, the term 'queue' in this context refers to the usage of predicted labels from the past m frames.
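The propagation step described above can be sketched roughly as follows. This is an illustrative simplification, not the authors' code: the function name is hypothetical, context features stand in for the patches pooled from the preceding m frames, and the restriction to a spatial neighborhood is omitted for brevity.

```python
import numpy as np

def propagate_labels(target_feats, context_feats, context_labels, k=7, temperature=0.07):
    """Propagate per-patch labels from a queue of context frames to the target frame.

    target_feats:   (N, d) patch features of the target frame
    context_feats:  (M, d) patch features pooled from the preceding m frames
    context_labels: (M, C) soft labels (ground truth or previously predicted) per context patch
    Returns (N, C) predicted soft labels for the target patches.
    """
    # cosine similarity between every target patch and every context patch
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    c = context_feats / np.linalg.norm(context_feats, axis=1, keepdims=True)
    sim = t @ c.T  # (N, M)

    preds = np.zeros((target_feats.shape[0], context_labels.shape[1]))
    for i, row in enumerate(sim):
        topk = np.argsort(row)[-k:]          # the k most similar context patches
        w = np.exp(row[topk] / temperature)  # softmax-style weighting over the top-k
        preds[i] = (w[:, None] * context_labels[topk]).sum(0) / w.sum()
    return preds

# toy usage: a target patch most similar to the two class-0 context patches
ctx = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
ctx_labels = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
tgt = np.array([[1.0, 0.05]])
pred = propagate_labels(tgt, ctx, ctx_labels, k=2)  # assigns the target patch to class 0
```

Since no parameters are trained, this inference is fully deterministic given frozen features, which is the computational advantage noted in the rebuttals.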
>*Code release*
We will release the code and pre-trained checkpoints upon acceptance. For reproducibility we build on the open source implementation of MAE and provide all the relevant hyper parameters in the appendix.
>*Confidence intervals*
Our inference procedure is fully deterministic given a pre-trained model. We confirmed the same by running our inference with 5 different seeds for our ViT-S/16 models trained for 2000 epochs.
| Model | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 |
| -------- | --------- | --------- | --------- | --------- | --------- |
| ViT-S/16 | 62.0 | 62.0 | 62.0 | 62.0 | 62.0 |
>*Larger models ViT-B & ViT-L*
We address this in our global response and re-iterate here for convenience.
We note that ViT-S has approximately the same number of parameters as ResNet-50, enabling us to compare our method across a diverse set of baselines. Moreover, ResNet-50 is the largest backbone explored by prior work [12, 14, 16], all of which aimed at improving representations for correspondence. However, we agree that a systematic investigation into the scalability of our method in terms of model size (refer to L297) would be valuable. Practically speaking, we are constrained by resources. For instance, training a ViT-S/8 model on 8 Titan RTX GPUs for 2000 epochs requires approximately 16 days. Given the compute and time constraints, we trained a ViT-B/16 model for 400 epochs.
| Model | J & Fm |
| ---------- | --------- |
| ViT-S/16 | 58.1 |
| ViT-B/16 | 58.6 |
The improvement in performance is encouraging. To contextualize the magnitude of the improvement, we note that scaling DINO from ViT-S/16 to ViT-B/16 results in an improvement of 0.5 J&Fm.
>*Role of [CLS] token*
Following the original ViT paper, which appends the [CLS] token during supervised training and uses the output corresponding to the [CLS] token for predicting class labels, almost all follow-up work on representation learning using ViTs has followed this practice. Here, the role of the [CLS] token is similar: it is typically used to evaluate the learnt representation via linear probing. MAE also included a [CLS] token, likely to maintain consistency with existing literature and evaluations. In SiamMAE, much like in MAE, the [CLS] token doesn't play any role during pre-training. This design choice was inherited from both ViT and MAE. Our ablation study shows that while the [CLS] token isn't crucial for achieving good results, omitting it slightly hurts the performance.
| [CLS] token | J & Fm |
| ------------ | --------- |
| | 57.5 |
| ✓ | **58.1** |
>*General principle/strategy for preventing shortcut learning*
We agree that a general strategy of preventing shortcut learning would be great and is indeed an open question in the field of representation learning for computer vision.
>*Impact of dataset similarity has not been investigated*
We agree that the role of dataset similarity between the train and test tasks has not been studied and will add this limitation in the text. However, we would like to point out that most of the prior work we compare with (UVC, VFS, MAE-ST and VideoMAE) were trained on the same dataset i.e. Kinetics-400.
---
Rebuttal Comment 1.1:
Comment: > Confidence intervals
I was referring to CIs for the distribution induced by the random variables in the training procedure (shuffling of training set, initialisation of model parameters), not inference. It would be best if these could be obtained from at least 3 trials for the final version.
> [CLS] token
Thank you for running the experiment without a [CLS] token.
I understand that the [CLS] token may be useful in intermediate layers as a kind of position-agnostic "place" to accumulate global information, but it's a little unclear to me why the attention maps for the final layer would be meaningful if no loss is applied to the [CLS] token (Section 4.4). Is it possible to shed some light on this? Perhaps because the other tokens "use" the global information without necessarily "adding to it" in the final layer?
This reminds me - are the query and key projections symmetric (i.e. $W_Q = W_K$) in the self-attention layers?
> Impact of dataset similarity not investigated
Thank you for acknowledging this.
> Larger models
Thank you for running initial experiments.
> Future vs past
Thank you for running this experiment; I hope it can be included in the final version.
**Overall**
The paper is intuitive, well-motivated and well-written. The authors have used the rebuttal to strengthen the paper. I do not have any major concerns and keep my initial positive rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Comment
Comment: Thank you for your comments and suggestions. We address the reviewer concerns below:
>Confidence intervals
Ideally, we would like to run all our experiments/ablations multiple times. However, we note that this is not a common practice in the representation learning literature due to the high compute requirements per experiment. Nevertheless, we will re-run our base ablation setting 3 times to quantify the variance (if any) and include it in the revised version.
>[CLS] token
We don’t have a good understanding of why the attention maps for the [CLS] token are meaningful. Exploring this is an interesting research question which we leave to future work.
>Are the query and key projections symmetric?
No, we don’t use the same query and key weights. | null | null | null | null | null | null |
Balance, Imbalance, and Rebalance: Understanding Robust Overfitting from a Minimax Game Perspective | Accept (poster) | Summary: This paper explains the phenomenon of robust overfitting in adversarial training from a minimax game perspective. The author considers AT as a minimax game between the model trainer and the attacker, pointing out the imbalance between them leads to the network memorizing non-robust features, causing robust overfitting. Based on these explanations, the author proposes several measures to rebalance the minimax game, thereby mitigating robust overfitting and improving adversarial robustness.
Strengths: 1. It is interesting to study AT from a minimax game perspective.
2. The paper proposes multiple measures to alleviate robust overfitting and enhance adversarial robustness.
Weaknesses: 1. The motivation is unclear. The author's explanation of the robust overfitting process is based on some observation-driven analysis, which is difficult to find convincing. For example, the attacker injects non-robust features for misclassification, and the cause of robust overfitting is the network's memorization of non-robust features. What exactly are the false non-robust mappings and the falsely memorized non-robust features? Can the authors use an intuitive and precise statement to explain the mechanism of robust overfitting?
2. The method's novelty is limited. The author claims that previous attempts to change attacker strength have not focused much on robust overfitting. However, there is existing research in this area:
Yu C, Zhou D, Shen L, et al. Strength-Adaptive Adversarial Training, arXiv preprint arXiv:2210.01288, 2022.
3. The experimental results are incomplete and not significant. 1) Did the author confirm that the robustness of MLCAT in Table 1 is lower than AWP? 2) The author introduces multiple measures to mitigate robust overfitting and reports their combined performance. However, what are the individual performances of each technique? Considering that even combining multiple existing techniques for robust overfitting mitigation can further improve robustness, it is necessary to report the experimental results of each individual technique applied to Standard AT.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Please refer to comments in weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer kwEx for your time and efforts in reviewing this work. Below, we address your main concerns of this work.
---
**Q1.** What exactly are the false non-robust mapping and the falsely memorized non-robust features? Can the authors use the intuitive and precise statement to explain the mechanism of robust overfitting?
**A1.** To address your concerns, we add both intuitive illustrations and formal definitions below. We will append these discussions in the revision to make it clearer.
**1) Intuitive Understandings**
Following your suggestion, we plot an intuitive figure Fig A (in Rebuttal PDF) to illustrate each individual stage during the robust overfitting. Due to the space limit, please refer to A2 of Reviewer 55GS for detailed descriptions.
**2) Precise Definitions**
In this paper, we mainly follow Ilyas et al.'s definitions of robust and non-robust features, and extract the features following the same procedure. Thus, "non-robust mapping" and "non-robust features" refer to the same thing, because Ilyas et al. define **robust/non-robust features as mappings** $f:X\to \mathbb{R}$ (Sec 2). The key difference is that they extract features only from the final model, while we study features learned at different training stages.
To state it precisely, a training configuration $t\in\mathcal{T}$ (LR, epoch, etc) specifies a hypothesis class of features/mappings that it can robustly fit on training data, i.e.,
$H_t=\{f_\theta:X\to \mathbb{R} \mid \mathbb{E}_{(x, y)\sim D_{\rm train}}\inf_{\|\bar x - x\|\leq\varepsilon}[y \cdot f_\theta(\bar x)]>0$ ($f_{\theta}$ is robust on training data), and $\theta\in\Theta$ is attainable under the configuration $t\}$.
We say a non-robust feature $f$ is falsely memorized if $f\notin H_{before}, f\in H_{after}$, where $H_{before},H_{after}$ refer to the hypothesis classes before and after LR decay, respectively. In other words, the non-robust $f$ is falsely memorized by the model (as a robust feature) under the after-LR-decay trainer. This mismatch could happen because the trainer only sees training examples and can overfit them under an imbalanced minimax game. Since this feature is essentially non-robust (in the population sense), it still behaves non-robustly on test data, and even introduces shortcuts for test-time attacks (verified in Sec 3.2.2).
**3) Empirical Evidence**
Notably, our explanation of robust overfitting is further justified by four extensive experiments in Section 3.2.2, where **we extract non-robust features from models before and after LR decay following Ilyas et al.’s procedure**, and compare their influence on robustness (Verification 1), target-class information (Verification 2), class correlation (Verification 3), and class symmetry (Verification 4). The phenomena fully support our dynamic game perspective of AT.
---
**Q2.** The method's novelty is limited. The author claims that previous attempts to change attacker strength have not focused much on robust overfitting. However, there is existing research in this area (Yu et al., SAAT).
**A2.** Thank you for pointing out this recent work, and we will add it for discussion in revision. Like ours, SAAT also considered stronger attack as a way to counter robust overfitting. However, we remark that there are several key differences:
- **Different perspectives.** The **main focus/contribution in our work is the minimax game understanding of robust overfitting (Sec 3)**. The three strategies are naturally motivated by our understanding to rebalance the minimax game. Instead, the design of SAAT is motivated directly by the influence of the perturbation budget on robustness disparity. Therefore, the two works clearly have different motivations and design principles.
- **Different solutions.** Notably, a stronger attack is **only one of the three strategies proposed in our work** (Sec 4), and it is **optional** and **NOT adopted in the final ReBAT method** (bootstrap+small decay) because it degrades clean accuracy a lot. Besides, ReBAT attains **very good accuracy and robustness *without stronger attack*, and it can still outperform AWP+SAAT** (e.g., clean 79.49 vs. 81.86, AA 49.29 vs. 51.22) with negligible overfitting.
- **Different strategies for adjusting the attacker.** The two also differ in the proposed stronger attack. SAAT gradually increases $\varepsilon$ along training and adjusts it adaptively for different samples. Instead, ours is much simpler and easier to use: **we only apply larger perturbations after LR decay to counter the stronger trainer, and use a constant perturbation budget for all samples** (Sec 4.2). As shown in Table B in the attached PDF, this simple strategy achieves comparable or even better performance than SAAT (78.17 vs. 76.37 natural acc, 50.99 vs. 48.86 best AA, 47.66 vs. 47.17 last AA), and still brings slightly better robustness when combined with other methods like AWP and ReBAT.
Thus, according to these key differences, our method is still quite different from SAAT and provides new perspectives and solutions for understanding and alleviating robust overfitting.
---
**Q3.** Did the author confirm that the robustness of MLCAT (Yu et al.) in Table 1 is lower than AWP?
**A3.** Indeed, there is a transcription mistake in the WideResNet-34 results of MLCAT in Table 1. Thanks for pointing it out. We add the correct results in Table A (Rebuttal PDF) and will fix them in the revision.
---
**Q4.** What are the individual performances of each technique?
**A4.** We note that we have provided a detailed analysis of each technique in **Figure 5 (Sec 4)** and included the quantitative results in **Appendix C**. We further summarize the results in Table C in Rebuttal PDF. We can see that each technique is useful for mitigating RO and combining them leads to better performance.
---
Hope our elaborations and new results above could address your concerns. Please let us know if there is more to clarify.
---
Rebuttal 2:
Title: Could you please have a look at our rebuttal?
Comment: Dear Reviewer kwEx, thanks for your time reviewing our paper. We have meticulously prepared a detailed response addressing the concerns you raised. Could you please have a look to see if there are further questions? Your invaluable input is greatly appreciated. Thank you once again, and we hope you have a wonderful day!
---
Rebuttal 3:
Title: Please take a look at author response and let us know if your opinion has changed.
Comment: Thank you.
---
Rebuttal Comment 3.1:
Comment: I thank the authors for their response. However, I remain unconvinced by the explanation provided for the mechanism behind robust overfitting:
1. If adversarial attacks are intended to introduce new non-robust features, why doesn't the network learn the easily learnable non-robust features from the dataset, similar to how it happens in standard training?
2. Why does the model need to memorize non-robust features? Because these are non-robust features, why wouldn't attackers try to counteract them rather than letting the model memorize them?
---
Reply to Comment 3.1.1:
Title: Further Response to Reviewer kwEx
Comment: Thanks for your reply! We will address your remaining concerns below:
---
**Q1.** If adversarial attacks are intended to introduce new non-robust features, why doesn't the network learn the easily learnable non-robust features from the dataset, similar to how it happens in standard training (ST)?
**A1.** In our analysis, we follow the well-known robust/non-robust feature framework for understanding adversarial training [1,2], where the attacker creates misclassification by perturbing non-robust features, and the model tries to eliminate them. Built upon this framework, our minimax game perspective gives **a more practical characterization of AT dynamics**. Ideally, if a classifier only learns robust features during AT (as analyzed in [1]), then it will be perfectly robust (i.e., clean acc = robust acc). However, this does not hold in practice, since we always observe a large gap between accuracy and robustness. Thus, **practical AT models indeed learn useful but non-robust features during training**. And as a result, the attacker can use these non-robust features to generate misclassified adversarial samples, as we always observed in practice.
On the other hand, unlike in ST, **learning more non-robust features will face countermeasures by the attacker in AT, so the trainer cannot memorize non-robust features as easily as in ST**. For example, for a non-robust feature $f(x)$ assigned to class $y$, the attacker can add this feature to another example $x'$ from the $y'(\neq y)$ class to misclassify it, and the new adversarial example $(x'_{adv},y')$ contains the feature $f(x)$ but is now assigned to class $y'$. If the model further memorizes this new example, it will destroy the original $f(x)\to y$ feature (and this new feature will also be destroyed in the next update). Thus, due to this adversarial effect, **the model will not be able to memorize all non-robust features generated by the attacker.**
Notably, if the attacker and the model trainer are properly balanced (e.g., before LR decay), in the long run the two players can strike a balance, where the model memorizes no additional non-robust features generated by the attacker, **as we analyzed and verified in** **Sec 3.1**. In this case, the model is not perfectly robust and the attacker can still generate adversarial examples, yet the two maintain a roughly constant level of robustness (Fig 1a).
For more empirical evidence, we refer to the discussions in Sec 3.1. We will add these explanations in the revision for better understanding.
**Reference:**
[1] Tsipras, et al. Robustness may be at odds with accuracy. ICLR. 2019.
[2] Ilyas et al. Adversarial examples are not bugs, they are features. NeurIPS 2019.
---
**Q2.** Why does the model need to memorize non-robust features? Because these are non-robust features, why wouldn't attackers try to counteract them rather than letting the model memorize them?
**A2.** We will address your questions point by point.
> Why does the model need to memorize non-robust features?
>
As the model trainer’s objective is $\min_\theta \ell_{\rm CE}(x_{\rm adv}, y)$, its goal is to increase training accuracy on adversarial examples $x_{\rm adv}$. As the non-robust features contained in $x_{\rm adv}$ are useful for classification and thus help decrease training loss, **the model itself wants to memorize non-robust features whenever possible**. However, due to the countermeasures of the attacker, **how much this goal can be fulfilled depends on the relative strength between the attacker and the trainer**.
> Because these are non-robust features, why wouldn't attackers try to counteract them rather than letting the model memorize them?
>
As we elaborated in **A1** above, if the trainer and the attacker are balanced, the model is unable to memorize all non-robust features because the attacker can constantly craft countermeasures.
However, as we analyzed in Sec 3.2, the trainer and the attacker become imbalanced after LR decay, and endowed with a smaller LR, **the stronger trainer can now draw a more complex region that robustly memorizes this non-robust feature on the training set**. In this case, **the relatively weak attacker can no longer craft enough countermeasures.** As a result, the model memorizes more and more non-robust features that the attacker cannot counteract, **which is directly revealed in the dramatic increase of training robust accuracy after LR decay (Fig 1)**. Besides, we have verified how this imbalance leads to the memorization of non-robust features in **Sec 3.1 (Fig 2a)**, and we further analyze how this memorization leads to RO with extensive experiments in **Sec 3.2.**
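As a rough, self-contained illustration of the alternating minimax dynamic described above (not the paper's actual training code), the NumPy sketch below pits a one-step FGSM-style attacker against a gradient-descent trainer on a toy linear model. All names, sizes, and the learning rate are hypothetical; `lr` plays the role of the trainer's strength discussed in this response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): binary logistic model w on 2-D inputs, labels in {-1, +1}.
w = np.array([1.0, -1.0])
X = rng.normal(size=(64, 2))
y = rng.integers(0, 2, size=64) * 2 - 1

def loss(w, X, y):
    # Logistic loss that the trainer minimizes on adversarial inputs.
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def fgsm_attack(w, X, y, eps=0.1):
    # Attacker's move (inner maximization): a one-step L_inf perturbation
    # along the sign of the input gradient.
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))   # = -dloss/dmargin, always positive
    grad_x = (-y * s)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

X_adv = fgsm_attack(w, X, y)
clean_loss, adv_loss = loss(w, X, y), loss(w, X_adv, y)

# Trainer's move (outer minimization): one gradient step on the adversarial batch.
# The learning rate `lr` is the knob whose decay, per the analysis above,
# strengthens the trainer and unbalances the game.
s = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))
grad_w = np.mean((-y * s)[:, None] * X_adv, axis=0)
lr = 0.1
w_new = w - lr * grad_w
```

One round of this loop raises the loss (attacker) and then lowers it again (trainer); iterating the two moves reproduces the alternating game the response describes.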
---
Hope the explanations above could address your concerns. Please let us know if there is more to clarify. We are happy to take your further questions before the discussion stage ends! | Summary: This paper empirically shows that robust overfitting is caused by the over-memorization of the non-robust features after learning rate decay. To mitigate the issue of robust overfitting, the authors propose to use a stronger training attack, a smaller learning rate decay rate, and a bootstrapped adversarial training loss. The comprehensive empirical results validate the effectiveness of the proposed method in mitigating robust overfitting and even improving robustness.
Strengths: 1. This paper provides comprehensive verifications for their proposed reason for robust overfitting. The authors clearly show the effect of the false memorization of the adversarial non-robust features after learning decay in robust overfitting.
2. The empirical experiments on various datasets and networks are comprehensive. The results support the authors’ claim.
Weaknesses: 1. Besides extensive empirical results, it would be better for the authors to provide some analyses from a theoretical perspective (possibly using game theory which could be related to the empirical results in this paper).
2. The proposed method could hurt the natural test accuracy to some extent.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the effect of the hyper-parameter $\lambda$?
2. Could you provide the result evaluated on CIFAR-10 in the experiments part? I am afraid that I did not find the corresponding results.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The proposed method can effectively mitigate the issue of robust overfitting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer EtSp for appreciating the comprehensiveness and solidness of the verification of our understanding. Below, we address your main concerns.
---
**Q1.** It would be better for the authors to provide some analyses from a theoretical perspective (possibly using game theory which could be related to the empirical results in this paper).
**A1.** There is a body of literature that analyzes AT from a game theory perspective; see [1-5]. However, existing theory papers only analyze AT’s Nash equilibrium under toy models (e.g., Gaussian features and linear models) and **static** minimax players. **None of the existing literature considers a dynamic game or could explain robust overfitting from a game perspective, particularly in practical AT algorithms, as done in our work.** Nevertheless, we find some interesting connections between them, along with new insights, that are worth noting.
**Connections.** Under these simplified assumptions, prior works show that although the Nash equilibrium of the AT objective exists and is robust, **the current alternating optimization of AT may fail to converge to the Nash equilibrium**, see, e.g., a recent work [5]. The key reason is that the trainer can falsely fit the non-robust features [5], which is in a similar spirit to our analysis that robust overfitting is caused by the falsely memorized non-robust features after LR decay. **This shows that our explanation is not only in line with the cutting-edge theory of AT, but also further explains robust overfitting from a game perspective for the first time.**
**New insights.** We also notice that existing AT game theory papers cannot explain robust overfitting, and from the perspective of our theory, this is plausibly because they overlook the dynamic change of the AT players. Thus, our understanding of AT as a dynamic game can inspire more in-depth theoretical characterizations of AT from the game theory perspective.
As our work mainly focuses on understanding the robust overfitting of practical AT algorithms, and it is generally hard to theoretically analyze the training dynamics of practical DNNs, we leave more theoretical investigations under strong assumptions to future work. Following your suggestion, we will add this discussion on the relation to game theory in the revision.
**References:**
[1] Bulo et al. Randomized prediction games for adversarial machine learning. TNNLS, 2016.
[2] Pal & Vidal. A game theoretic analysis of additive adversarial attacks and defenses. NeurIPS, 2020.
[3] Bose et al. Adversarial example games. NeurIPS, 2020.
[4] Pinot et al. Randomization matters how to defend against strong adversarial attacks. ICML, 2020.
[5] Balcan et al. Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games. AISTATS, 2023.
---
**Q2.** The proposed method could hurt natural accuracy to some extent.
**A2.** Due to the well-known trade-off between accuracy and robustness, better robustness often comes with lower accuracy. Nevertheless, Table 1 (quoted below) shows that our ReBAT version achieves **both better accuracy and robustness** **than PGD-AT, TRADES, and AWP** on CIFAR-10. The ReBAT[strong] version utilizes a stronger attacker, so it sacrifices accuracy in exchange for higher robustness. **Table 8 (Appendix C.4)** shows that further incorporating the knowledge distillation (KD) technique can further boost the clean accuracy of ReBAT while maintaining high robustness. Therefore, ReBAT achieves a good trade-off between accuracy and robustness, as it can attain good clean accuracy while significantly improving model robustness.
*Performance of best-epoch models on CIFAR-10 with PreActResNet-18. (Quoted from Table 1)*
| Method | Natural Accuracy | AutoAttack |
| --- | --- | --- |
| PGD-AT | 81.62 | 47.51 |
| TRADES | 80.45 | 48.09 |
| AWP | 81.11 | 50.09 |
| ReBAT (ours) | 81.86 | 51.13 |
| ReBAT[strong] (ours) | 78.71 | 51.49 |
| ReBAT + KD (ours) (Table 8) | 84.20 | 50.72 |
---
**Q3.** Effect of the hyper-parameter $\lambda$.
**A3.** Here, $\lambda$ controls the strength of the KL regularization, and a larger $\lambda$ imposes stronger regularization on model flatness. In **Table 7 in Appendix C.2**, we include a detailed analysis of the influence of $\lambda$; we quote some results below. We can see that under a small decay rate ($d=1.5$), a medium $\lambda$ attains the best performance, while a too-large $\lambda$ severely hurts both natural accuracy and robustness due to excessive flatness regularization. When the decay rate is larger, e.g., $d=4$, a larger $\lambda$ can significantly improve model robustness by regularizing the model’s fitting ability, but meanwhile its natural accuracy also degrades. In view of this analysis, in practice we simply choose a small decay factor $d=1.5$ and a medium $\lambda=1.0$, as this attains both good accuracy and robustness.
*Analysis of the decay factor ($d$) and regularization strength ($\lambda$) on CIFAR-10 with PreActResNet-18 (best-epoch results). (Quoted from Table 7)*
| $d$ | $\lambda$ | Natural Accuracy | AutoAttack |
| --- | --- | --- | --- |
| 1.5 | 0.0 | 81.90 | 50.81 |
| | 1.0 | 81.86 | 51.13 |
| | 4.0 | 79.68 | 50.52 |
| 4.0 | 0.0 | 83.05 | 50.38 |
| | 1.0 | 82.46 | 50.87 |
| | 4.0 | 80.99 | 51.00 |
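For concreteness, a $\lambda$-weighted KL regularizer of the kind analyzed above can be sketched in a few lines of NumPy. This is a generic TRADES-style illustration with hypothetical names and directions; the exact form of the ReBAT objective is given in the paper and may differ.

```python
import numpy as np

def log_softmax(z):
    # Numerically stable log-softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def kl_reg(p_logits, q_logits):
    # Mean KL(softmax(p) || softmax(q)) over the batch; always >= 0,
    # and zero exactly when the two predicted distributions agree.
    lp, lq = log_softmax(p_logits), log_softmax(q_logits)
    return np.mean((np.exp(lp) * (lp - lq)).sum(axis=1))

rng = np.random.default_rng(0)
logits_clean = rng.normal(size=(8, 10))                    # hypothetical model outputs
logits_adv = logits_clean + 0.3 * rng.normal(size=(8, 10))

lam = 1.0   # the lambda whose effect Table 7 analyzes
reg = kl_reg(logits_adv, logits_clean)
# total_loss = ce_loss + lam * reg   (larger lam => stronger flatness regularization)
```

A larger `lam` penalizes disagreement between clean and adversarial predictions more heavily, matching the "stronger regularization on model flatness" description above.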
---
**Q4.** Missing CIFAR-10 results in the experiments part.
**A4.** This is caused by a typo: Table 1 actually shows the CIFAR-10 results with two different backbones. The correct caption should be “Comparing our method with several training methods on CIFAR-10 under the perturbation norm $ε_\infty = 8/255$ based on the PreActResNet-18 and WideResNet-34-10 architectures.” We will correct it in the revision. Thanks for pointing it out.
---
Hope our elaborations and new results above could address your concerns. Please let us know if there is more to clarify.
---
Rebuttal Comment 1.1:
Title: My concerns are solved
Comment: Thanks for your replies. It is impressive to see the proposed method can improve both natural and robust test accuracy. Besides, it seems that the proposed method provides some new insights regarding explaining robust overfitting, which would be beneficial to future research in this area. Therefore, I still lean toward Weak Acceptance. | Summary: This paper studies the robust overfitting phenomenon in adversarial training. Meanwhile, this paper focuses on a specific problem “the robust overfitting occurs when we use learning rate decay techniques.” This paper proposes a game perspective to explain the robust overfitting. It claims that the robust overfitting happens because of the imbalance between the attacker and the model after the LR decay. Moreover, this paper proposes ReBAT method to improve the robustness and mitigates the overfitting phenomenon.
Strengths: This paper studies a specific but interesting problem, the robust overfitting phenomenon. It explains that the overfitting happens because the model learns a mapping for the non-robust features after LR decay.
This paper provides an interesting perspective that helps us understand the cause of robust overfitting. In particular, the robust training problem is a min-max problem, which can be regarded as a game. Robust overfitting arises because the original equilibrium breaks.
This paper provides extensive verification to support its explanation, which addressed most of my questions when I first reviewed this paper.
Weaknesses: 1. For Figure 2 (b), I guess the red line denotes w/ LR decay and the blue line denotes w/o LR decay;
2. For Section 3.2.1, the authors share an interesting and important insight, but I’m afraid it is a little trifling. Could the authors provide a figure to demonstrate it?
3. In Section 4.1, the authors provide some techniques to address robust overfitting, e.g., bootstrapping and a small decay factor. From the theoretical analysis in this paper, I guess there may exist other methods to achieve balanced learning of robust and non-robust features, and more discussion would be helpful. Could the authors provide some theoretical analysis of the proposed method for the re-balance?
4. I find the proposed method ReBAT may incur more computational overhead to reach a convergent result. Could the authors discuss the limitations of this work?
5. Could the authors share more discussion about the game idea? For example, could multi-task learning be regarded as a game with multiple players?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 55GS for your careful reading and for your appreciation of the novelty and solidness of our work. Below, we address your main concerns about this work.
---
**Q1.** For Figure 2 (b), I guess the red line denotes w/ LR decay and the blue line denotes w/o LR decay.
**A1.** Thank you for pointing it out. You are correct. We will correct it in the revision.
---
**Q2.** For section 3.2.1, the authors share an interesting and important insight. Could the author provide a figure to demonstrate it?
**A2.** Following your suggestion, we plot an intuitive figure, Fig A (Rebuttal PDF), to illustrate each individual stage of robust overfitting. Specifically, Fig A (a) shows that before LR decay, a balance is achieved between the trainer and the attacker, and the weak trainer will NOT fit the non-robust features in misclassified adversarial training examples (blue→orange). Fig A (b) shows that after LR decay, the trainer, endowed with stronger fitting ability, memorizes the adversarial examples to their original labels (blue) by drawing a more complex decision boundary (from a feature view, the non-robust features are falsely memorized altogether). Fig A (c) shows that this new decision boundary introduces easy-to-reach shortcuts to attack test examples, as it falsely maps y’-class features (the orange region) to the y class. As a result, training robustness rises but test robustness degrades. This is in line with our analysis in Sec 3.2.1 that after decay, the falsely memorized non-robust features open shortcuts for test-time attack and lead to robust overfitting.
---
**Q3.** From the theoretical analysis in this paper, I guess there may exist other methods to achieve the balanced learning of robust and non-robust feature, and more discussions are helpful. Could the author provide some theoretical analysis about the proposed method for the re-balance?
**A3.** Indeed, there are other kinds of strategies to restore the balance of minimax training, including some tricks proposed in prior works. We have provided an overview of these strategies from our theoretical perspective in **Section 5**, where we explain how existing techniques (10+) can help restore the balance from three different aspects: data regularization, training regularization, and stronger attacker.
Here, we provide a more detailed explanation of the proposed methods (bootstrap, small decay, and stronger attack). As shown in the theory of Section 3.2, robust overfitting arises when the model trainer memorizes false non-robust features contained in adversarial examples. Therefore, there are two ways to alleviate this effect: either regularize the trainer (so it cannot fit them), or strengthen the non-robust features (so they cannot be fitted). The bootstrap regularization enforces a flat landscape of model weights (a simpler decision boundary), such that the non-robust features cannot be memorized via drawing complex decision boundaries. Similarly, the small LR decay factor regularizes the optimizer’s local fitting ability, which also prevents memorizing the non-robust features. A stronger attacker, on the other hand, makes the non-robust features too strong to be memorized.
We hope the explanations above help address your questions; we will add them in the revision.
---
**Q4.** I find the proposed method ReBAT may cause more computation overhead to get a convergent result.
**A4.** To ease your concerns, we calculate the per-epoch training cost of different methods. From the table below, we can see that the per-epoch training cost of ReBAT is almost the same as vanilla AT (129.4s v.s. 125.3s), lower than many other advanced AT methods (e.g., AWP). Thus, in this view, ReBAT is rather computationally efficient.
As you mentioned and can be observed from the training process in Figure B (Rebuttal PDF), ReBAT takes slightly longer training time to converge to the best performance (it peaks at around the 170th epoch while AT peaks at around the 110th epoch), because it adopts smaller LR decay and regularization to avoid robust overfitting. **But it achieves much higher robustness and does not suffer from robustness deterioration.** Further considering that in practice, people usually train longer in AT (typically 200 epochs) to attain better accuracy and use early stopping to select the best checkpoint, the two methods actually require similar total training epochs in practice.
Therefore, we believe that the computation cost of ReBAT is comparable to vanilla AT while attaining much better robustness.
*Training time comparison on CIFAR-10 with PreActResNet-18 backbone.*
| Method | Training Cost (per epoch) |
| --- | --- |
| PGD-AT | 125.3 |
| AWP | 135.6 |
| ReBAT | 129.4 |
---
**Q5.** Could the author share more discussions about the game idea?
**A5.** There is a body of literature that analyzes AT using game theory, e.g., [1], usually regarding AT as a zero-sum game between two players. A recent work argues that AT should be treated as a non-zero-sum game [2]. For multi-objective AT, there could exist multiple attackers that constitute a game of multiple players, where the AT could behave very differently.
Due to the word limit, please refer to **A1 in our response to Reviewer EtSp** for deeper discussions of the connections between our work and those papers and the new insights our work brings to the game idea. Existing game theory papers mainly study AT’s Nash equilibrium under toy models and **static minimax players**, but not **a dynamic game in practical AT algorithms**, as in our understanding of robust overfitting. We will elaborate on this part in the revision.
**References:**
[1] Balcan et al. Nash Equilibria and Pitfalls of Adversarial Training in Adversarial Robustness Games. AISTATS, 2023.
[2] Robey et al. Adversarial Training Should Be Cast as a Non-Zero-Sum Game. *arXiv preprint arXiv:2306.11035.* 2023.
---
Hope our elaborations and new results above could address your concerns. Please let us know if there is more to clarify.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for the comprehensive explanation.
Your clarification has effectively addressed my concerns. The game idea presents a compelling direction within adversarial robustness. I believe such insights could potentially influence developments in other areas as well. I'm inclined to adjust my score to a 7. | Summary: This paper investigates the phenomenon of robust overfitting in adversarial training and explains it from a minimax game perspective. The authors analyze how the decay of the learning rate disrupts the balance between the model trainer and the attacker, leading to robust overfitting. They propose a method called ReBalanced Adversarial Training (ReBAT) to mitigate robust overfitting and achieve good robustness even after long training.
Strengths: 1 The paper is well organized and easy to follow.
2 The paper provides a holistic understanding of robust overfitting in adversarial training by analyzing the imbalance between the model trainer and the attacker from a minimax game perspective. This perspective helps explain why robust overfitting occurs in adversarial training and why it does not occur in other training methods.
3 The experiments cover different network architectures and benchmark datasets, providing strong empirical evidence.
Weaknesses: 1 It is better to clearly state the difference between the defined robust/non-robust features and the previous ones in [15]
2 ReBAT[strong] seems like a strange notation; please consider changing it.
3 The caption of Table 1 seems not correct?
4 Does the minimax game view provide any insights on the accuracy and robustness tradeoff?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the above weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Missing...
Please state explicitly the limitations and potential negative impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Znmp for your careful reading and for appreciating the proposed understanding and method for robust overfitting. Below, we address your main concerns about this work.
---
**Q1.** Difference between the defined robust/non-robust features and the previous ones in Ilyas et al.
**A1.** In this paper, we **follow Ilyas et al’s definitions of robust and non-robust features**, and extract the features following the same procedure. The main difference is that, instead of focusing only on the last trained model (as in Ilyas et al), we study **the dynamic behaviors of AT models on utilizing these robust/non-robust features during the training process.** Specifically, we extract non-robust features from models obtained before and after LR decay, and use these features to verify our understanding of robust overfitting from a dynamic minimax game perspective (Sec 3.2).
---
**Q2.** ReBAT[strong] seems like a strange notation, please consider to change one.
**A2.** Here, ReBAT[strong] refers to the combination of ReBAT with a stronger attack. **ReBAT++** could be a good alternative name, since this version further enhances adversarial robustness over ReBAT. Please let us know if you have a better option.
---
**Q3.** The caption of Table 1 seems not correct?
**A3.** Thanks for pointing it out. There is a typo here. Table 1 shows CIFAR-10 results with different backbones. We will change the caption to “Comparing our method with several training methods on CIFAR-10 under the perturbation norm $ε_\infty = 8/255$ based on the PreActResNet-18 and WideResNet-34-10 architectures.”
---
**Q4.** Insights on the accuracy and robustness trade-off from our view of minimax game.
**A4.** The minimax view can also provide new insights into understanding the accuracy-robustness trade-off. As we discussed in the paper, during AT, $\mathcal{A}$ attacks by injecting non-robust features into adversarial examples.
- When the minimax game is **balanced**, these non-robust features will not be fitted (i.e., discarded). Since these non-robust features also contribute to classification accuracy (see Ilyas et al.), discarding more non-robust features (i.e., more robust) will lead to worse accuracy.
- When the minimax is **imbalanced** and the model trainer is stronger than the attacker (as studied in our paper), the model will capture more and more non-robust features and get better accuracy, while at the cost of introducing shortcuts to the attacker and leading to worse and worse robustness (revealed in Section 3.2).
Therefore, in both balanced and imbalanced minimax game, there will be a trade-off between accuracy and robustness, and the discrepancy is larger when robust overfitting happens.
---
**Q5.** Please state explicitly the limitations and potential negative impact.
**A5.** Thanks for your suggestion. A main limitation of the proposed method is that, as we regularize the model trainer to restore the balance of the minimax game, training takes slightly longer to attain optimal performance, e.g., 110 epochs (AT) vs. 170 epochs (ours). However, in practice, people usually train longer in AT (typically 200 epochs) to attain better accuracy and use early stopping to select the best checkpoint. As our method does not need early stopping (it does not suffer from robust overfitting) and has negligible computational overhead per epoch, the total training time is comparable with vanilla AT.
As for the social impact, since our method is designed to better defend against adversarial attacks, it should have a positive impact by enhancing the security of machine learning models.
---
Hope our elaborations above could address your concerns. Please let us know if there is more to clarify. | Rebuttal 1:
Rebuttal: The Rebuttal PDF can be seen in the attached file, which contains
- Figure A: an intuitive illustration for the proposed understanding of robust overfitting;
- Figure B: a plot of the training process comparing ReBAT with vanilla AT;
- Table A, B, C: additional comparison experiments between different AT methods.
Pdf: /pdf/1419c71e03f448448390b775a7e824070d33ef0a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Error Discovery By Clustering Influence Embeddings | Accept (poster) | Summary: This work presents a method for discovering subsets of the test set of a multi-class classification task on which a trained model misclassifies a large portion of examples due to the same root cause. The method uses a factorized low-rank approximation of a bilinear influence function, parameterized using the Hessian of the loss function with respect to the parameters of the classifier, to project high-dimensional influence explanations into lower-dimensional representations of test examples, naturally called influence embeddings. The influence embeddings preserve the structure of the high-dimensional influence explanation space and are used to cluster the test examples into subsets using k-means. The influence embeddings are also shown to be useful for finding slices of the test set that satisfy certain size and failure properties, and can be used to find the most problematic training examples causing the failures. Empirical results show that the proposed method outperforms other slice discovery methods on benchmarks and is able to find known errors in several settings.
Strengths: * The problem is important and well-motivated
* The general method is easy to understand and seems scalable
* Empirical evidence is strong and shows the method is very suitable for the specified problem
* The paper as a whole is fairly well-written
Weaknesses: * Given the work of Schioppa et al. [2022], the methodological contribution seems quite small and straight-forward
* The clustering algorithms, and the procedures using clustering as a subroutine (InfEmbed-Rule), seem to treat the clustering algorithm as an afterthought. There is no justification for why k-means is used.
* The theoretical results are slightly over-stated in the introduction. It seems like more can be done on the theoretical side to motivate the method. Lemma 1 and Section 3.5 contain interesting results that could be strengthened and elaborated to improve the justification for the presented method.
* The writing in the background section could be improved a bit. It seems like notation is introduced in an unnatural order and in a slightly imprecise way. Also the definition of the Hessian (line 98) is unclear: which examples are used to define the Hessian? Training examples? Test examples? Or maybe both?
* Figure 1 really is not that helpful in understanding the method and takes up quite a bit of space.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Why is k-means the clustering algorithm of choice? It seems like specifying the number of slices could hinder the method's effectiveness. DBSCAN was used in an ablation and seems like a better (maybe not the best) choice since it is density-based.
* Why is a hierarchical clustering (tree or DAG shaped clustering) not used for InfEmbed-Rule? Why do we need to re-cluster using k-means to find slices with user defined properties?
* How is the subset of the training set passed to FactorHessian chosen? (Section 4, lines 229-230)
* How would this slice discovery method be used to improve the model?
* How does the quality of the low-rank approximation of the influence function affect the quality of slices discovered?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The choice of influence function seems to be chosen by fiat and not really justified. The entire method hinges on the specified influence function being the best. It does seem to be used in the sub-community, but is not justified by the authors. I think the limitations of the method with respect to the issues with the choice of influence function should be addressed.
Is it possible that this method could be used to cause harm in some way. Could the authors address this with respect to the method presented?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address your concerns below.
**Weaknesses**
- **Methodological Contribution**: We agree that the actual computation of the influence embeddings being clustered in the main algorithm builds on the work of Schioppa et al. [2022]. However, we make 3 key contributions:
1. Our work is the first to connect one popular problem in model debugging (slice discovery), with a popular tool in model debugging (influence functions).
2. We theoretically derive what the representation to cluster in a slice discovery algorithm should be in terms of an influence function-based definition of coherency. This is significant, given that past slice discovery algorithms choose to cluster various representations (CLIP embeddings, last-layer representations etc) without theoretically explaining why those representations should enable effective slice discovery.
3. We are the first to use influence functions (of any kind, including but not limited to Schioppa et al) for global explainability (simultaneously explaining predictions for the entire test dataset), unlike past work, which solely focused on local explainability (explaining one prediction at a time, independent of other predictions).
- **Why K-Means**: Our core contribution is identifying what representation should be clustered for slice discovery, and not how to cluster those representations. Furthermore, we tested 3 clustering algorithms (DBSCAN, spectral clustering, Gaussian mixture model) on the Spotcheck benchmark, and found that 1) performance was not sensitive to the choice of clustering algorithm and 2) no matter what clustering algorithm was used on influence embeddings, the performance beat the Domino and PlaneSpot baselines. These results were unfortunately hidden in Appendix J, and will be moved to the main body.
- **Theoretical Results**: Thank you for the suggestion. We will rewrite the presentation of the theoretical results. The slice discovery field has to date been without much theoretical underpinning, and this submission hopes to be a first step towards addressing that gap. In this work, we focus on showing intuition for why simply using K-Means to cluster influence embeddings implicitly leads to desirable properties that other slice discovery methods needed to explicitly encourage, suggesting that influence embeddings are the right representation to cluster.
- **Background Section**: Thank you for the writing feedback; we will rewrite to clarify that training examples are used to define the Hessian.
- **Figure 1**: Thank you for the feedback, we will update the figure to make the steps more clear.
**Questions**
- **Why K-Means (and not DBSCAN)?**: We definitely agree using DBSCAN and not having to choose the K, the number of slices, beforehand is a good idea that would make InfEmbed easier to use, especially since the various clustering algorithms performed similarly (Appendix J results). We will highlight this possibility in the main body.
- **Hierarchical Clustering**: InfEmbed-Rule can actually be viewed as a form of top-down hierarchical clustering (using K-means to subdivide existing clusters) with 1 important modification - we do not further cluster a cluster if it already has sufficiently low accuracy and large size. The reason for doing so is that for error analysis, we want to find the largest cluster with sufficiently low accuracy.
- **Factor Hessian**: We chose a uniformly random subset. We will add the detailed numbers in the Appendix.
- **How would this slice discovery method be used to improve the model**: This is a great question. Once we identify a high-error slice, we can identify the slice opponents - the examples whose inclusion in the training data increases the loss on the slice (see Section 3.6), remove them, and either retrain or fine-tune the model. If the examples in a high-error slice turn out to be mis-labeled (as was the case for some slices in the AGNews case study of Section 4.4), we can identify their proponents (training examples which supported the prediction for the wrong label) and remove them. Model improvement is a particularly important challenge, and interesting future work.
- **Quality of low-rank approximation**: The quality of the low-rank approximation can indeed affect the quality of the discovered slices. We found that a small Arnoldi dimension, rank, or size of training data subset used for the Hessian could lead to worse quantitative and qualitative results. However, we also found that following the recommendations by Schioppa et al. [2022] led to satisfactory results.
**Limitations**
- **Choice of influence function**: We chose to use influence functions because 1) they have been shown to be powerful model debugging tools (Koh and Liang, 2017, Han et al 2020, Kong et al 2020), 2) they provide actionable next steps for model improvement (remove opponents of examples in high-error slices and retrain), and 3) using them leads us to derive influence embeddings, which possess appealing properties when clustered. We will provide additional justification in the draft.
- **Harms**: As with every model diagnosis or understanding method, there is a risk of the method giving the practitioner a false sense of safety. If the diagnosis is wrong, and an action is taken based on the wrong diagnosis, this may have unintended consequences.
We thank you for the feedback, and would be happy to address any other questions that you have. We encourage you to reconsider your score in light of our response.
**References**\
[Koh and Liang, 2017]: Understanding black-box predictions using influence functions. ICML 2017.\
[Han et al, 2020]: Explaining black-box prediction and unveiling data artifacts through influence functions. ACL 2020.\
[Kong et al, 2020]: Understanding instance-based interpretability of variational autoencoders. Neurips 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. Given the promised changes and responses, I am happy to update my score above. | Summary: The paper presents a method to discover groups of test examples on which the model performs badly, and the misclassification of the examples is caused by the same reason (defined as coherence).
This problem is known as slice discovery. The method leverages influence functions to compute an influence explanation for each test example. This generates a vector of the influence of each training example on the test example. Given that this vector is high-dimensional, they derive influence embeddings, which are clustered by applying K-Means. The authors propose a variant in which there is no need to specify the number of clusters, only the minimum size of the slice and the maximum accuracy. The method is evaluated on several datasets.
Strengths: - the paper is well written and structured clearly. The contributions are clearly reported, and the related works are described.
- the problem is relevant because finding the training examples that influence a group of test examples on which the model underperforms is the first step to debugging a model and fixing spurious correlations in the data.
- using influence functions to address this problem is novel according to the provided references, and interesting
- the authors formalize the coherence desideratum
- the method is evaluated extensively on multiple datasets (text and images), considering multiple types of causes that induce the model to make classification errors
Weaknesses: _Reproducibility_: it is not clear if the source code of the experiments will be made available upon acceptance. The datasets are publicly available.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - how was the number of clusters K selected for each experiment with InfEmbed?
- I couldn't find the values of the hyperparameters P, D, and K on dcbench and SpotCheck
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A very brief discussion of the limitations is addressed in the supplementary material. It would be useful for the reader to have the limitation section in the main text, even if there is limited space for this addition.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address your concerns below.
**Weaknesses**
- **Reproducibility**: Regarding reproducibility, we plan to release all our code, datasets, and additional artifacts to replicate our analyses.
**Questions**
- **Choosing number of clusters K for InfEmbed experiments**: For K (number of clusters), we follow past work to ensure comparability of quantitative results: when comparing to Domino in Table 1, we follow Domino and use K=25, and when comparing to various methods on the SpotCheck benchmark in Table 2, we set the number of clusters K using the Bayesian information criterion (BIC), following how the SpotCheck benchmark authors chose K for their slice discovery method, PlaneSpot.
- **Hyperparameters**: As noted in the previous answer, K is set following past work to ensure comparability of quantitative results: K=25 when comparing to Domino in Table 1, and chosen via the Bayesian information criterion (BIC) on the SpotCheck benchmark in Table 2, following the benchmark authors' procedure for PlaneSpot. We will update the draft to include the values of all hyperparameters.
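For readers unfamiliar with BIC-based selection of K, a minimal pure-Python sketch follows. It only illustrates the general idea (score each candidate K by fit plus a complexity penalty, keep the lowest score); it is not the SpotCheck/PlaneSpot procedure, which applies BIC to a Gaussian mixture model. The synthetic data, the SSE-based variance estimate, and the penalty form here are all illustrative assumptions.

```python
import math
import random

random.seed(0)
# Synthetic 1-D data with two well-separated groups (illustrative only).
data = [random.gauss(0.0, 0.3) for _ in range(50)] + \
       [random.gauss(5.0, 0.3) for _ in range(50)]

def kmeans_sse(xs, k, iters=50):
    """Lloyd's algorithm in 1-D; returns the within-cluster sum of squares."""
    centers = xs[:k]  # deterministic init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda j: (x - centers[j]) ** 2)].append(x)
        # Keep the old center if a cluster empties out.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sum(min((x - c) ** 2 for c in centers) for x in xs)

def bic(sse, n, k):
    # Schwarz-style criterion: model fit n*ln(SSE/n) plus a k*ln(n)
    # penalty on model size. Lower is better.
    return n * math.log(sse / n) + k * math.log(n)

n = len(data)
scores = {k: bic(kmeans_sse(data, k), n, k) for k in (1, 2, 3)}
```

With well-separated groups, the two-cluster model scores far better than the single-cluster one; note that this crude SSE-based surrogate can still favor over-segmentation, which is one reason mixture-model likelihoods are typically used in practice.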
We thank you for the feedback, and would be happy to address any other questions that you have.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the clarifications. I'm keeping my score as it is. | Summary: This paper proposes a heuristic clustering-based method for identifying erroneous groups of test examples. I'm erring on the side of caution here and am going with a weak reject, but I have limited familiarity with the subfield.
AC note: score increased 4 -> 6 after rebuttal.
Strengths: * Error analysis tools are useful for many different ML systems; better algorithms for discovering slices might have significant practical impact.
* The paper is reasonably clearly written and easy to follow.
Weaknesses: * The main algorithm for estimating the influence embeddings seems to be taken from Schioppa et al.; without it, the methodological contribution of the paper seems quite limited in its nature.
* The paper seems rather simplistic in its approach and not very exciting in its results – cf. Table 2, which is the main evaluation where the technique from the paper is compared to the related work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What is the computational complexity of the method? Is the main computational bottleneck in the implicit Hessian estimation?
* Is there any intuition or an ablation study on the number of elements needed for the Hessian estimation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address your concerns below.
**Weaknesses**
- **Methodological Contribution**: We agree that the actual computation of the influence embeddings being clustered in the main algorithm is a straightforward application of Schioppa et al. [2022]. However, we believe we make 3 key contributions, which we clarify here, and will add to the paper:
1. Our work is the first to connect one popular problem in model debugging (slice discovery), with a popular tool in model debugging (influence functions).
2. A main contribution is to theoretically derive what the representation to cluster in a slice discovery algorithm should be (influence embeddings), in terms of an influence function-based definition of coherency. This is significant, given that past slice discovery algorithms choose to cluster various representations (CLIP embeddings, last-layer representations, last-layer representations projected by SCVIS) without theoretically explaining why clustering those representations should enable effective slice discovery. In doing so, we have improved over previously published slice discovery algorithms (e.g. methods that extracted CLIP embeddings or last-layer representations), where the actual computation of those representations did not require any methodological contribution.
3. We are the first to use influence functions (of any kind, including but not limited to Schioppa et al) for global explainability (simultaneously explaining predictions for the entire test dataset), unlike past work, which solely focused on local explainability (explaining one prediction at a time, independent of other predictions).
- **Simplicity and Results**: We agree with your take: the algorithm itself is simple, which is a benefit in practice. However, its development required theoretical insight that past slice discovery approaches missed. In addition, our results (**Table 1 and Table 2**) show that we outperform competing approaches. Lastly, across a series of case studies, we are able to use the scheme to identify underperforming slices in real-world tasks, which is an important challenge in practice. Slice discovery is a critical and challenging task in practice; the proposed approach is a step towards developing reliable tools for solving that problem.
**Questions**
- **Computational Complexity**: Indeed, the main computational bottleneck is the implicit Hessian estimation. Although Schioppa et al. do not explicitly state their complexity, it is O(P), where P is the Arnoldi dimension. This is because each of the P steps in the Arnoldi iteration requires computing a Hessian-vector product, where the Hessian is typically computed on a subset of the training data for tractability. The complexity of computing influence embeddings is then exactly the same as that of the influence function (IF) implementation of Schioppa et al. We also wish to point out the following note in Schioppa et al.’s paper: “Empirically, IFs with Arnoldi iteration achieve speedups of 3-4 orders of magnitude over the LISSA-powered IFs (Koh and Liang 2017) and of 10x over TracIn (Pruthi et al. 2020), a heuristic gradient-only alternative to IFs, with better or similar accuracy.” For the K-means portion, the complexity is O(n_samples$\times$n_iterations$\times$k_clusters). In practice, the clustering step is near instantaneous, so the method is dominated by Hessian estimation.
- **Number of elements needed for Hessian estimation**: We estimate the Hessian with a random sample (representative mini-batch) of the training set. In our experiments on the SpotCheck and Domino benchmarks, we performed ablations to estimate the number of critical samples and found that less than 5 percent of the training samples are needed in such settings. We will update the paper with a detailed discussion of these experiments.
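To make the O(P) claim above concrete, here is a toy pure-Python sketch (not the authors' or Schioppa et al.'s implementation) in which the Hessian is only ever touched through Hessian-vector products, and a counter records that building a P-dimensional Krylov basis costs one HVP per step:

```python
import math

# Toy quadratic loss whose Hessian we only access through HVPs,
# standing in for the implicit Hessian estimation discussed above.
H = [[2.0, 0.5, 0.0],
     [0.5, 1.0, 0.2],
     [0.0, 0.2, 0.5]]

calls = 0  # counts HVPs, the dominant cost in practice

def hvp(v):
    global calls
    calls += 1
    return [sum(h * vj for h, vj in zip(row, v)) for row in H]

def arnoldi_basis(b, P):
    """Build a P-dimensional Krylov basis; one HVP per step after the first."""
    nrm = math.sqrt(sum(x * x for x in b))
    Q = [[x / nrm for x in b]]
    for _ in range(P - 1):
        w = hvp(Q[-1])
        for q in Q:  # Gram-Schmidt against previous basis vectors
            c = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - c * qi for wi, qi in zip(w, q)]
        nrm = math.sqrt(sum(x * x for x in w))
        if nrm < 1e-12:  # Krylov space exhausted
            break
        Q.append([x / nrm for x in w])
    return Q

Q = arnoldi_basis([1.0, 1.0, 1.0], P=3)
# calls == P - 1: overall cost scales linearly with the Arnoldi dimension P.
```

In the real setting each `hvp` call is a full automatic-differentiation pass over the mini-batch defining the Hessian, which is why it dominates the runtime.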
We thank you for the feedback, and would be happy to address any other questions that you have. We encourage you to reconsider your score in light of our response.
---
Rebuttal Comment 1.1:
Comment: It seems that other reviewers were more excited about the paper's contribution than I was. I won't let my lack of enthusiasm hold the paper back.
Please add the complexity to the paper! Increasing my score to 6. | Summary: In this paper, the authors propose InfEmbed on the slice discovery problem. The method is derived from the influence function and surrogate embedding representations are proposed for reducing complexity. Overall, the paper is well written and the derivation of the method is reasonable. Some pros and cons are discussed as follows:
Strengths: The paper proposes a new slice discovery method, based on K-Means and the influence embeddings the authors propose. The method is well-designed and, I believe, relatively easy to deploy. The overall derivation of the method is clear and is supported by theoretical analysis.
The question that the authors are addressing is of importance and the experiment results seem to validate the performance of this method.
The proposed method, if it is effective like the paper states, can provide insights in other domains that require identifying groups of data.
Weaknesses: K-means is sometimes unstable. Since the authors claim that other clustering algorithms can be used, they should also validate the results on more clustering algorithms, such as spherical k-means.
The result section is not organized well. It is recommended that the authors discuss datasets, baselines, and results in different subsections and provide more details of data description and baseline methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do the authors validate the performance of the method with different clustering algorithms?
How is the runtime complexity using influence embedding compared with directly using influence functions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some results are not well visualized and shown. It is recommended that the authors reorganize the result section to make it more readable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address your concerns below.
**Weaknesses**
- **Trying other clustering algorithms**: We actually did try 3 other clustering algorithms (DBSCAN, spectral clustering, Gaussian mixture model) on the Spotcheck benchmark, and found that no matter what clustering algorithm was used on influence embeddings, the performance beat the Domino and PlaneSpot baselines. These results were unfortunately hidden in Appendix J, and will be moved to the main body.
- **Organization of results section**: Thank you for the suggestion. We will update the submission to reorganize the results section, and also provide additional overview data and baseline method details.
**Questions**
- **Trying other clustering algorithms**: Please see the response to weaknesses bullet point 1.
- **Runtime complexity using influence embedding compared with directly using influence functions**: Directly using influence functions to generate the influence explanation of every test example, and then clustering them, would be computationally impractical, because for each test example the length of its influence explanation equals the training set size (i.e., if the training data had 1M examples, each test example's influence explanation would be 1M-dimensional, requiring the influence of every training example on that test example to be computed). In that sense, clustering influence embeddings (instead of influence explanations) is far more efficient, as the dimension of what is being clustered is much smaller.
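The dimensionality argument above can be illustrated with stand-in vectors; the sizes and the dot-product form below are illustrative assumptions, not the paper's actual embedding construction:

```python
import random

random.seed(0)
n_train, n_test, D = 1000, 5, 8

# Hypothetical D-dimensional embeddings (stand-ins for influence
# embeddings; the real ones come from gradients and the Hessian).
emb_train = [[random.gauss(0, 1) for _ in range(D)] for _ in range(n_train)]
emb_test = [[random.gauss(0, 1) for _ in range(D)] for _ in range(n_test)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A full influence explanation holds one score per training example,
# so each test example carries an n_train-dimensional object ...
explanation = [[dot(t, tr) for tr in emb_train] for t in emb_test]
assert len(explanation[0]) == n_train  # 1000-dim per test example
# ... versus a D-dimensional embedding that is far cheaper to cluster.
assert len(emb_test[0]) == D
```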
We would be happy to answer any additional questions that you may have.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I will keep my current ratings. | Rebuttal 1:
Rebuttal: **General Response**\
We thank all the reviewers for their generous feedback, and for noting that the work addresses an important problem (**Reviewers 2vT2, uzoE, yzdi, MYaZ**), is novel (**Reviewers 2vT2, 6oL7**), provides a thorough empirical evaluation (**Reviewers 2vT2, 6oL7, yzdi**), and well-written (**Reviewers MYaZ, 6oL7**).
In addition to the point-by-point response to each reviewer, here, we address some general concerns:
- **Methodological contribution:** A few reviewers asked us to contrast with the closely related work of Schioppa et al. [2022]. We make the following key contributions:
1. We theoretically derive what the representation to cluster in a slice discovery algorithm should be—influence embeddings—in terms of an influence function-based definition of coherency. This is important because past slice discovery algorithms choose to cluster various representations (CLIP embeddings, last-layer representations, last-layer representations projected by SCVIS) without theoretically explaining why clustering those representations should enable effective slice discovery. In doing so, we have improved over previously published slice discovery algorithms (e.g. methods that extracted CLIP embeddings or last-layer representations).
2. Our work is the first to use influence functions for the slice discovery task. In addition, we are the first to use influence functions (of any kind, including but not limited to Schioppa et al., 2022) for global explainability (simultaneously explaining predictions for the entire test dataset), unlike past work, which solely focused on local explainability (explaining one prediction at a time, independent of other predictions).
- **Experiments relating to coherence & label homogeneity**: We have now conducted additional experiments to compare the label homogeneity and coherence scores of InfEmbed compared to baselines in several settings. We find that Infembed returns clusters that have higher coherence scores and identifies low-performing clusters that are more label homogenous than baselines.
- **K-means & the importance of the clustering component of Inf-Embed**: In slice discovery, the goal is to partition a set of examples in groups on which a model has high performance versus low performance. So far, this task has been formulated as a search or clustering one. However, the key challenge in this area is to identify a suitable representation that would allow search or clustering algorithms to reliably find low-performing clusters. Consequently, our key contribution is to propose influence embeddings as such a representation for any clustering or search algorithm for slice discovery. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a method (InfEmbed) for discovering coherent slices of data such that the model fails on samples within a slice due to similar reasons. InfEmbed uses k-means to cluster a representation proposed in the work called influence embeddings, where samples with similar influence embeddings have similar influence explanations. The proposed slicing method outperforms prior work on standard slice discovery benchmarks.
Strengths: 1. The work attempts to solve an important problem of surfacing failure modes of a model automatically.
2. The work has sufficient technical novelty: it proposes influence embeddings, relating them to influence explanations and using them in the context of slice discovery.
3. The work does a thorough evaluation by evaluating the proposed method against multiple slice discovery benchmarks in addition to multiple case studies.
Weaknesses: My main concerns are around hyperparameter selection and missing analysis.
1. The work formalizes coherence (Equation 4), but does not compare to prior works (mainly Domino) in terms of coherence.
2. It is not clear how the method promotes label homogeneity when all gradients of the neural network are considered. The analysis in 3.5 works only when the gradients in the fully connected layer are considered.
3. For a practical use case, the approach still requires the users to determine and specify the branching factor $B$ and the maximum accuracy $A$ of a slice.
4. The work doesn't discuss the selection of hyperparameters: e.g., L231 - the values of P and D used in FactorHessian, and the number of clusters ($K$) for the results in Table 1.
5. It is unclear how the authors choose the number [L286] of layers for which gradients are considered. Is it based on the total number of parameters?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: My main concerns are around hyperparameter selection, missing details and analysis.
1. It would be great if the paper can provide error bars in Table 1 for both methods, as Figure 3 of Domino shows high standard deviations across the different settings. Also, it would be useful to report the total number of settings (out of 1235 settings), where InfEmbed outperforms Domino.
2. Can the authors report coherence scores (Equation 4) for different methods in Table 1?
3. Can authors measure the label homogeneity (entropy of predicted distributions) of InfEmbed and compare it with Domino?
4. Can authors provide guidelines on how a practitioner could select the branching factor $B$ and desired accuracy $A$ to get slices with “at most A% accuracy with at least $m$ samples”, which can surface all error modes ranked by accuracy? Currently it seems like the user may have to try a bunch of values to arrive at a reasonable set of slices.
5. What is the value of $K$ used for K-means for the results in Table 1 and Table 2. Does it match the values used by Domino?
6. Section 5: why are the influence embeddings expected to be real valued? In other words, why is $\lambda$ guaranteed to have positive values?
7. What do the practical values for $C$ in Lemma 1 [L582] look like? Can a theoretical bound be established on the values of $C$?
8. On L188: Why are norm terms treated as a constant?
9. Any reason for marking the reproducibility as N/A?
10. Can the performance be further improved if k-means is replaced with Domino’s error-aware mixture modeling that more directly promotes label homogeneity?
Typos:
* L42: “influence explanation, L
* L215: “whose influence on the slice is most harmful”
* L297: “Apriori one would not have guessed”
* L560: $H_\theta^{P - 1} b$
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We address your concerns below.
**Weaknesses**
- **Coherence scores and Experiments**: See discussion in the first bullet point of the next section.
- **How does InfEmbed promote label homogeneity?**: While the analysis in Section 3.5 only considered gradients in the last layer, it does also explain why label homogeneity is encouraged, even when considering gradients in all layers. This is because for two examples, the dot-product between their gradients in all layers is equal to the dot-product between their gradients in the last-layer, plus the dot-product between their gradients in the remaining layers, and the presence of the former encourages label homogeneity. We will add this clarification to the paper.
- **How to select B (branching factor) and A (minimum accuracy of a slice) when running InfEmbed-Rule**:
1) **Regarding B**: In all experiments, we used branching factor B=3. The rationale is that B should not be too large, to avoid unnecessarily dividing large slices with sufficient low accuracy into smaller slices. In practice, B=2 and B=3 did not give qualitatively different results for the case studies where InfEmbed-Rule was used.
2) **Regarding A**: The choice of A (seek slices with at most A% accuracy) is application specific, depending on the overall accuracy over the entire test data, and the level of accuracy tolerable for the application. We suggest starting with a low value of A, see if slices are discovered, and increase A if not. Fortunately, the clustering step is fast, so that this process can be done interactively.
- **Hyper-parameters (P, D, K)**: Thank you for raising this point. For the case studies, we used a consistent hyper-parameter scheme across the board, and for the baselines, we match the underlying approaches. We will update the draft with all of these hyper-parameters.
- **Choosing layers**: As is customary in the influence function literature, we choose the last layer. Note that we did not tune this choice for the case study, and it is possible to scale the approach to all parameters for the models we consider.
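The layer-wise argument in the label-homogeneity response relies only on the fact that a dot product of concatenated gradient blocks splits into a sum of block-wise dot products; here is a minimal check with made-up gradient values:

```python
# Gradients split into a last-layer block and a "rest of network" block
# (values are made up for illustration).
g1_last, g1_rest = [1.0, -2.0], [0.5, 0.0, 3.0]
g2_last, g2_rest = [0.0, 4.0], [1.0, 2.0, -1.0]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# List concatenation stands in for the full (all-layers) gradient.
full = dot(g1_last + g1_rest, g2_last + g2_rest)
# Additivity: full dot product = last-layer term + remaining-layers term,
# so the last-layer term (which encourages label homogeneity) is always present.
split = dot(g1_last, g2_last) + dot(g1_rest, g2_rest)
assert abs(full - split) < 1e-12
```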
**Questions**
- **Error bars, Coherence Scores, and Label Homogeneity**: Thank you for these suggestions. Starting with the Domino benchmark, we compute the coherence scores, which correspond exactly to the K-means objective that we minimize, across all slices for influence embedding clustering as well as for Domino. We find a correspondence between our current results (Table 1 in the main paper) and the coherence scores. In every setting where we outperform Domino (1219 out of 1235 trained settings), our coherence scores are better, in some cases by almost 50 percent. Regarding label homogeneity: in high-performing clusters, we find that influence embedding clustering has lower label homogeneity than Domino. However, for error clusters we find the reverse. The goal of our scheme is to find clusters where the model is ‘wrong for the same reason’. We conjecture that influence embeddings are most effective for these settings. We will update Tables 1 and 2 with error bars, label homogeneity scores, and coherence scores.
- **Choosing hyper-parameters B, A for InfEmbed-Rule**: Please see response to Weaknesses #3.
- **Choosing hyperparameter K**: For K (number of slices), we follow past work to ensure comparability of quantitative results: when comparing to Domino in Table 1, we follow Domino and use K=25, and when comparing to various methods on the SpotCheck benchmark in Table 2, we set the number of clusters K using the Bayesian information criterion (BIC), following how the SpotCheck benchmark authors chose K for their slice discovery method, PlaneSpot.
- **Why are the influence embeddings real-valued?** Thank you for pointing this out. In finding a low-rank approximation of the Hessian, we actually only use the top eigenvectors / eigenvalues for which the eigenvalues are positive. Effectively, this means we first project the Hessian to the closest (in terms of L2 norm) symmetric matrix that is positive definite, before finding its low-rank approximation. This means that in the algorithm FactorHessian, $V$, $\lambda$ should actually be the top-D eigenvectors / eigenvalues for which the eigenvalues are positive. We will add this clarification to the paper.
- **Values of C from the Lemma**: C is the sum of the L2 norms of the influence explanations of the test data, and is thus large. However, given the strong quantitative results, and that examples within discovered slices tend to have the same opponents, the high value of this theoretical constant does not affect practical results.
- **Norms**: We apologize for the confusion - what that line is trying to say is that clustering to maximize intra-cluster dot-product similarity is equivalent to clustering to minimize intra-cluster Euclidean distance, because the objective function of the former is equal to the objective function of the latter plus a constant that does not depend on the specific clustering. Therefore, when considering whether 2 examples would be placed in the same slice, we just need to consider their dot-product, which is easier to analyze than their Euclidean distance.
- **Why N/A for reproducibility**: This was a mistake. We will release all our code, datasets, and artifacts to replicate our analyses.
- **Error-aware mixture modeling**: One of the advantages of our method is that by simply applying K-Means to the right representation, influence embeddings, it achieves the same goal as Domino’s error-aware mixture model, i.e., the influence embeddings themselves are ‘error-aware’. Having said this, a mixture model can be applied on top of the influence embeddings, and it might be possible to endow the influence embeddings with other qualities. Studying the effects of using influence embeddings with an error-aware mixture model is an interesting future direction.
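On the eigendecomposition point above, a minimal NumPy sketch (our own illustration with a hypothetical matrix, not the paper's FactorHessian code) of keeping only the top-D eigenpairs with positive eigenvalues, which implicitly projects the symmetric Hessian onto the nearest positive semi-definite matrix before the low-rank approximation:

```python
import numpy as np

def top_d_positive_eigenpairs(H, D):
    """Eigendecompose symmetric H and keep only the top-D eigenpairs
    whose eigenvalues are positive; reconstructing from these factors
    yields a positive semi-definite low-rank approximation of H."""
    w, V = np.linalg.eigh(H)        # eigenvalues in ascending order
    order = np.argsort(w)[::-1]     # re-sort descending
    w, V = w[order], V[:, order]
    pos = w > 0                     # drop non-positive eigenvalues
    w, V = w[pos][:D], V[:, pos][:, :D]
    return w, V

# Hypothetical indefinite "Hessian" with eigenvalues 3, 1, and -2.
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))
H = Q @ np.diag([3.0, 1.0, -2.0]) @ Q.T
w, V = top_d_positive_eigenpairs(H, D=2)
H_approx = V @ np.diag(w) @ V.T     # PSD by construction
```

Zeroing the negative eigenvalues of a symmetric matrix is exactly the Frobenius-norm projection onto the PSD cone, which matches the clarification given in the answer above.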
We would be happy to address additional questions.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I thank the authors for their response. I am updating my rating as I am satisfied with the coherence comparison to prior work and clarifications on hyperparameters. I strongly encourage the authors to include the remaining technical details in the paper/appendix. | null | null | null | null | null | null |
Operation-Level Early Stopping for Robustifying Differentiable NAS | Accept (poster) | Summary: This paper studies the robustness issue of DARTS from the perspective of overfitting. It uses gradient matching scores to measure the overfitting issues, and proposes an early-stop strategy to address the problem of saturated skip connections in normal DARTS. The proposed approach has been evaluated on a number of search spaces, showing comparable results to the state of the art.
Strengths: + The idea of using similarity between gradient directions of training vs validation batch makes sense.
+ Extensive experiments on various search spaces and benchmarks.
+ Decent results on DARTS C10 space comparing to SOTA.
Weaknesses: - Although the idea to use early stopping to robustify DARTS is interesting, the metric used in this paper is very similar to that in GM-NAS [1]. Therefore it seems to me that the novelty of this paper is somewhat discounted.
- It is good to see experiments on a number of search spaces and benchmarks. However, the results on DARTS S1-S4 space is missing.
- Some discussion on overfitting is not very clear, e.g., from Fig. 1 it is difficult to see quantitatively how strong the negative correlation between the architecture parameters and the validation loss is: indeed the curvatures of the lines are different, but it would be better to have a more rigorous analysis.
[1] Generalizing Few-Shot NAS with Gradient Matching. Shoukang Hu*, Ruochen Wang*, Lanqing Hong, Zhenguo Li, Cho-Jui Hsieh, and Jiashi Feng. ICLR 2022.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * How the proposed approach performs on DARTS S1-S4?
* It seems this approach doesn't work so well on MobileNet space, any possible reasons for that?
* How this approach may compare to the train-free methods, like TE-NAS and ZiCo?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful and insightful review, which will help us further improve this paper. Next, we will answer your questions one by one, and we hope our responses will strengthen your view of the paper.
**W1.**
Thank you for pointing this out. We apologize for not providing a detailed explanation for GM-NAS in the related work due to space constraints. In reality, GM-NAS and our approach are entirely distinct lines of work. Please refer to the global rebuttal for further detail.
We hope our answers address your concerns, and we will consider adding the differences to our revised version.
**W2 \& Q1.**
Many thanks for the valuable advice. DARTS S1-S4 are not standard search spaces. They are simplified variations of the DARTS search space proposed by R-DARTS to investigate DARTS' failure mode. In our supplementary material, we further expanded these search spaces and presented DARTS' failure mode (both in terms of performance and the searched architectures) in these extended search spaces. To address your concerns, in the following table, we provide an additional performance comparison between OLES and other methods on DARTS S1-S4.
It is evident that OLES consistently outperforms DARTS, and outperforms R-DARTS and DARTS-PT, which are also dedicated to addressing the skip connection domination issue, in all but one setting.
| Datasets | Spaces | DARTS | R-DARTS | DARTS-PT | OLES |
|---|---|---|---|---|---|
| CIFAR-10 | S1 | 3.28 | 3.31 | 2.79 | **2.76** |
| | S2 | 2.55 | 2.44 | 2.47 | **2.43** |
| | S3 | 3.69 | 3.56 | 2.64 | **2.57** |
| | S4 | 3.05 | 3.05 | 2.92 | **2.73** |
| CIFAR-100 | S1 | 26.05 | 22.24 | 23.16 | **22.10** |
| | S2 | 25.40 | 23.34 | 22.10 | 22.18 |
| | S3 | 24.70 | 21.94 | 20.80 | **20.70** |
| | S4 | 21.35 | 20.70 | 19.98 | **19.18** |
**W3.**
Thanks for the suggestion. In Figure 1, the curves of the validation losses and architecture parameters over search iterations make it evident that there exists a negative correlation between the validation loss and the architecture parameters. To provide a more rigorous analysis, we calculated the Kendall-$\tau$ correlations between the validation loss and the architecture parameter of each operation. The results are presented in the following table.
| Skip-Connections | Sep-Conv-3x3 | Sep-Conv-5x5 | Dil-Conv-3x3 |
|---|---|---|---|
| -0.36 | -0.59 | -0.73 | -0.72 |
It can be observed that there is a clear negative correlation between validation losses and architecture parameters, particularly for parametric operations, where the negative correlation is more noticeable.
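For reference, the Kendall-$\tau$ statistic used above can be computed as follows (a self-contained illustration with made-up numbers, not the paper's actual measurements):

```python
def kendall_tau(x, y):
    """Kendall rank correlation: (concordant - discordant) pairs
    divided by the total number of pairs (no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical: validation loss rising while the architecture
# parameter falls gives a strongly negative tau.
val_loss = [0.9, 1.1, 1.3, 1.2, 1.5]
alpha = [0.40, 0.35, 0.30, 0.29, 0.20]
tau = kendall_tau(val_loss, alpha)  # 1 concordant pair, 9 discordant: tau = -0.8
```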
Furthermore, we simplify the scenario by considering only one set of candidate operations $\{o_1, o_2, o_3\}$. Suppose their corresponding validation losses and architecture parameters are $\{l_1, l_2, l_3\}$ and $\{\alpha_1, \alpha_2, \alpha_3\}$, respectively. The overall validation loss can be expressed as:
$$
l = l_1 \cdot \frac{e^{\alpha_1}}{e^{\alpha_1} + e^{\alpha_2} + e^{\alpha_3}} + l_2 \cdot \frac{e^{\alpha_2}}{e^{\alpha_1} + e^{\alpha_2} + e^{\alpha_3}} + l_3 \cdot \frac{e^{\alpha_3}}{e^{\alpha_1} + e^{\alpha_2} + e^{\alpha_3}}
$$
then
$$
\frac{\partial l}{\partial \alpha_1} = \frac{(l_1 - l_2)e^{\alpha_1 + \alpha_2} + (l_1 - l_3) e^{\alpha_1 + \alpha_3}}{(e^{\alpha_1} + e^{\alpha_2} + e^{\alpha_3})^2}
$$
Hence, when $l_1$ is larger, the corresponding $\frac{\partial l}{\partial \alpha_1}$ will be larger as well, leading to a more significant decrease in $\alpha_1$ during gradient descent. The other operations behave similarly. Although this scenario is greatly simplified, it illustrates the connection between validation losses and architecture parameters. We hope our answers address your concerns.
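This derivative can be verified numerically; the snippet below (our own illustrative check with arbitrary values) compares the closed-form gradient $\partial l/\partial \alpha_1 = ((l_1-l_2)e^{\alpha_1+\alpha_2} + (l_1-l_3)e^{\alpha_1+\alpha_3})/Z^2$ with a central finite difference:

```python
import math

def softmax_loss(alphas, losses):
    """Overall validation loss: softmax-weighted sum of per-operation losses."""
    Z = sum(math.exp(a) for a in alphas)
    return sum(l * math.exp(a) / Z for a, l in zip(alphas, losses))

def grad_alpha1(alphas, losses):
    """Closed-form partial derivative of the loss w.r.t. alpha_1."""
    a1, a2, a3 = alphas
    l1, l2, l3 = losses
    Z = math.exp(a1) + math.exp(a2) + math.exp(a3)
    return ((l1 - l2) * math.exp(a1 + a2)
            + (l1 - l3) * math.exp(a1 + a3)) / Z ** 2

alphas, losses = [0.3, -0.1, 0.5], [1.2, 0.7, 0.9]
eps = 1e-6
numeric = (softmax_loss([alphas[0] + eps] + alphas[1:], losses)
           - softmax_loss([alphas[0] - eps] + alphas[1:], losses)) / (2 * eps)
assert abs(numeric - grad_alpha1(alphas, losses)) < 1e-8
```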
**Q2.**
Thanks for the comment. Indeed, in the MobileNet search space, our approach has achieved competitive performance compared to state-of-the-art methods, slightly trailing behind several more complex approaches. As the MobileNet search space is not naturally designed for DARTS, we had to make several modifications (ProxylessNAS has adapted to this search space, and FairDARTS did not open-source their search process on MobileNet). Therefore, we believe there is still room for optimization in the results presented. Here, the purpose of using the MobileNet search space is to demonstrate the effectiveness of our approach in different search spaces.
Furthermore, our approach is remarkably simple, involving only minimal modifications to the original DARTS, thus incurring negligible additional overhead. It is easy to enhance the performance of OLES by integrating it with other more advanced differentiable NAS algorithms, conducting more fine-tuning, and leveraging advanced training tricks.
**Q3.**
Many thanks for the suggestion. Train-free NAS utilizes proxy metrics to predict the test performance or ranking of architectures without the need for training. However, current train-free methods still slightly lag behind traditional NAS methods in terms of performance, especially in large search spaces.
In the following table, we compare OLES with TE-NAS and ZiCo. It is evident that OLES still outperforms train-free methods in most cases. Benefiting from the compact search space, train-free methods exhibit slightly superior performance on NAS-Bench-201. However, in large search spaces such as DARTS search space, OLES performs better than TE-NAS and ZiCo. Train-free NAS is a highly promising research direction, as it not only greatly enhances NAS efficiency but also promotes a deeper understanding of neural networks.
| Method | NAS-Bench-201: CIFAR-10 | NAS-Bench-201: CIFAR-100 | NAS-Bench-201: ImageNet-16-120 | DARTS space: CIFAR-10 | DARTS space: ImageNet top-1 | DARTS space: ImageNet top-5 |
|---|---|---|---|---|---|---|
| TE-NAS | 6.1 $\pm$ 0.47 | 28.76 $\pm$ 0.56 | 57.62 $\pm$ 0.46 | 2.63 $\pm$ 0.064 | 26.2 | 8.3 |
| ZiCo | 6.0 $\pm$ 0.4 | 28.9 $\pm$ 0.3 | 58.20 $\pm$ 0.3 | 2.45 $\pm$ 0.11 | - | - |
| OLES | 6.3 $\pm$ 0.15 | 29.60 $\pm$ 0.22 | **56.03 $\pm$ 0.38** | **2.41 $\pm$ 0.11** | **24.5** | **7.4** | | Summary: The authors are addressing the issue of converging to a degenerate solution (many skip connections) using DARTS. The authors connect this behavior to overfitting to the training data. To remedy this issue, they suggest applying early stopping based on the correlation of gradients during the training of the architecture parameters and the weight parameters.
Strengths: The paper is trying to tackle an interesting problem in an elegant way. The story is clearly stated and the introduced approach is a nice extension of the classical early stopping. The performance of this method is on par or better than many other more sophisticated (and to some extent complicated) method.
Weaknesses: * The authors are claiming that the dominance of skip connections is due to the quick overfitting of weighted operations to the train data. The whole story of the paper relies on this assumption which is unfortunately not sufficiently supported. While curves on figure 1(a) are supporting this claim, figures 3(a) and 3(b) are a bit confusing. I would have expected the test performance to drop for both cases.
* There is an issue regarding the reported numbers: the authors are claiming that their method benefits operations with parameters and the test error is dropping (as expected), I am wondering why the number of parameters is barely different.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Addressing the mentioned weaknesses appropriately would lead to a higher rating
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 1 poor
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your considerate feedback. We sincerely appreciate your engagement in the review. Next, we will address your concerns one by one, and we hope this will improve your view of the paper.
**W1.**
Thanks a lot for your careful and insightful observation. In fact, in Figures 3(a) and 3(b), the number of iterations of DARTS has not yet reached the threshold where the test performance starts to drop, indicating that the overfitting issue has not yet appeared. In Figure 4 of our attached PDF, we have redrawn Figures 3(a) and 3(b) and extended the number of training iterations. Consequently, it becomes apparent that when the number of training iterations keeps increasing (>50), DARTS exhibits a noticeable decline in test performance. Moreover, Figure 3(c) in the original paper further corroborates this finding, as the performance collapse caused by the skip connection domination issue appears more easily on NAS-Bench-201; as a result, the test performance there declines at an earlier stage. We hope our answers address your concerns, and we will consider improving Figures 3(a) and 3(b) in the revised version.
**W2.**
Thanks for pointing this out. Our approach aims to address the issue of performance collapse caused by the abnormal aggregation of skip connections in differentiable NAS. NAS methods fundamentally seek the optimal model architecture, and in reality, models with more parameters are not necessarily superior to those with fewer parameters. The phenomenon you described, i.e., that the number of parameters in OLES is barely different from that of other methods, actually serves to illustrate that OLES resolves the issue of abnormal skip connection domination, rather than invariably benefiting operations with parameters.
Furthermore, in our comparisons, we presented the optimal results of other methods (mostly the results reported in the original papers). These results were obtained before the occurrence of skip connection domination. Once the skip connection domination issue emerges, there will be a noticeable performance collapse. As shown in Figures 1&2 in our attached PDF and Section C.2 in our supplementary material, DARTS and other methods like DARTS-PT encounter skip connection domination and performance collapse after training for longer epochs. Thus, there are no significant differences in the number of parameters.
We hope our answers address your concerns. Thanks again for this insightful comment.
---
Rebuttal Comment 1.1:
Comment: We would like to know if our responses have adequately addressed your concerns or if further clarification is needed. If you find it appropriate, we would appreciate a new rating. We are grateful for your time and thoughtful evaluation of our work. | Summary: This paper demonstrates the fundamental reason for the domination of skip connections in DARTS from the new perspective of overfitting of operations in the supernet, using preliminary experiments, and proposes the operation-level early stopping method to mitigate this phenomenon by using the GM score metric during the search.
Strengths: The idea of using GM metric to decide whether to update the OPs is broadly used in efficient-training papers. This paper proposes to use this metric to early stop the updating of specific OPs during the NAS procedure, based on their novel overfitting observations.
The comparison is intensive, showing the superiority of this proposed method, in terms of time cost, and accuracy.
Weaknesses: 1. The accuracy metric is one of the metrics to measure a NAS method. We also consider the Kendall rank correlation coefficient. Please compare with previous methods using this metric, because I am not sure whether the proposed early stopping mechanism will hurt the ranking or not. Usually, we say the ranking performance of a NAS method may be more important than the accuracy of the searched model.
2. About the overfitting threshold, "we determine the threshold by averaging the cosine similarity over 20 iterations for 30 randomly initiated architectures in each search space". Are the initial 20 iterations enough for determining the threshold? The initial stages may have dramatic changes in gradients. Also, the threshold is fixed during the NAS procedure; shouldn't it be adaptive or scheduled?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, the authors adequately addressed the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful and insightful review. Next, we will answer your questions one by one, and we hope our responses will strengthen your view of the paper.
**W1.**
Thanks for the valuable suggestion. As shown in Figure 3 in our attached PDF, we compare the Kendall rank correlation coefficients for OLES and DARTS. The results indicate that the early stopping mechanism does not hurt the ranking performance of DARTS. In the following table, we also conduct a comparison with other NAS methods, including RandomNAS, DARTS, GDAS, and SGAS. (RandomNAS and GDAS sample 4 excellent architectures in 4 rounds, and other methods randomly select 10 architectures from a single run.)
| RandomNAS$^*$ | DARTS | GDAS$^*$ | SGAS | OLES |
|---|---|---|---|---|
| 0.0909 | 0.13 | -0.1818 | 0.42 | 0.41 |
Notably, the Kendall coefficient of OLES closely aligns with SGAS. SGAS [1] aims to alleviate the effect of the degenerate search-retraining correlation problem. The results demonstrate that by mitigating operation parameter overfitting, differentiable NAS could focus on the potential of architectures themselves, thus enhancing the correlation between search metrics and the architectures discovered. We hope our answers address your concerns, and we will consider adding the comparison of the Kendall rank correlation coefficients into the revised version.
**W2.**
Thank you for pointing this out. Instead of relying on search or empirical knowledge to determine the early stopping threshold, we intend to employ an adaptive approach to determining this critical hyperparameter. It is important to clarify that our gradient matching (GM) score is dynamically computed by averaging over every 20 iterations throughout the entire training iterations, rather than being limited to the initial 20 iterations. We apologize for the confusion here. The selection of 30 randomly initiated architectures is carried out to initiate the experiment 30 times, enabling the identification of the optimal iteration number. As you pointed out, the initial stages may have dramatic changes in gradient values. Therefore, in our experiments, we compute the GM score throughout the entire training process to ensure the effectiveness of early stopping.
To elucidate the rationale behind selecting 20 iterations, we conduct experiments to assess various iterations along with their corresponding thresholds across different search spaces. As shown in the following table, the accuracy attained through GM score computation over every 20 iterations achieves the highest accuracy.
We have not yet found a method to determine the threshold in a more adaptive or schedulable manner, and we will leave this for future work. We sincerely appreciate your valuable suggestions, and we will provide a clearer introduction to the selection of the overfitting threshold in our revised version.
| Iterations | DARTS space: CIFAR-10 | DARTS space: CIFAR-100 | NAS-Bench-201: CIFAR-10 | NAS-Bench-201: CIFAR-100 | NAS-Bench-201: ImageNet-16-120 |
|---|---|---|---|---|---|
| 10 | 97.33 | 80.83 | 80.57 | 47.93 | 26.29 |
| 20 | **97.70** | **82.70** | **93.89** | **70.75** | **44.38** |
| 30 | 97.05 | 82.39 | 91.93 | 66.97 | 38.87 |
| 40 | 97.19 | 81.32 | 89.76 | 64.69 | 32.73 |
[1] SGAS: Sequential Greedy Architecture Search, CVPR 2020.
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed feedback
Comment: Thank you for the detailed feedback, the rebuttal addressed my concerns. I would like to increase the score from borderline accept to weak accept.
---
Reply to Comment 1.1.1:
Title: Thank you for your thorough review and raising the score
Comment: Thank you for your thorough review and raising the score for our submission. Your valuable feedback guided us in refining our experimental approach and the suggestions guided us in providing more concise and coherent explanations.
Thank you for your time, expertise, and for acknowledging our commitment to enhancing the quality of our research. We look forward to further enriching our work based on your insightful input. | Summary: The paper focuses on the robustness issues in differentiable NAS, specifically the domination of skip connections. It first analyzes the issue from a novel perspective, proposing that the domination of skip connections arises due to the overfitting of operations in the supernet during training. Then, the paper proposes the operation-level early stopping method, which monitors each operation in the supernet and stops its training when it tends to overfit. The paper employs a gradient matching approach to detect overfitting, comparing the gradients' directions of operations on training and validation data. A significant deviation in direction is considered an indication of overfitting. The proposed OLES addresses the domination of skip connections with negligible additional overhead. Extensive experiments demonstrate the effectiveness of OLES on different datasets and search spaces.
Strengths: S1. The paper demonstrates good originality. It provides a comprehensive analysis of the issue of the domination of skip connections in differentiable NAS by adopting a new perspective. Although straightforward, the perspective makes sense and is interesting. Specifically, the paper aims to explain the cause of this issue through the overfitting of operations in the supernet. The paper proposes the operation-level early stopping (OLES) method, which introduces gradient matching to address this matter. OLES elegantly and effectively resolves the domination of skip connections, incurring negligible additional overhead.
S2. The experiments are thorough and well-organized, containing experiments in different search spaces and an in-depth analysis of the proposed algorithm. The empirical results demonstrate that the proposed OLES achieves state-of-the-art performance on CIFAR. The availability of open-source codes further facilitates reproducibility.
S3. The presentation of ideas and algorithms is clear, while the references and background knowledge are comprehensive. The background knowledge and the issue to be solved are adequately introduced.
S4. The perspective and idea about the overfitting of operations have profound significance in uncovering the underlying causes of the domination of skip connections in DARTS. These novel perspectives, ideas, and algorithms contribute to a deep understanding of differentiable architecture search and may inspire future research in the field.
Weaknesses: W1. It needs a thorough explanation of the used gradient matching method. The authors should provide a more detailed introduction to gradient matching and clarify the differences from other methods.
W2. While the proposed method demonstrates significant improvements over the original DARTS in the experiments, it does not have a competitive advantage compared to other state-of-the-art methods.
W3. The paper specifically focuses on addressing the domination of skip connections through operator-level early stopping. The authors are suggested to discuss how the concept of operator-level early stopping can be applied to other scenarios beyond differentiable architecture search.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Q1. Are there any limitations to the methodology presented in this paper? Can it be applied to other differentiable NAS methods beyond the original DARTS?
Q2. Why does the proposed method not exhibit a competitive advantage over other state-of-the-art NAS methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful and insightful review. We are glad to receive your positive response and acknowledgment of our work. Next, we will answer your questions one by one, and we hope our responses further strengthen your view of our paper.
**W1.**
Thanks for the valuable advice. Gradient matching aims to leverage the gradient information of parameters to assist in making decisions and selections. For example, GM-NAS utilizes gradient matching scores to make splitting decisions, determining whether a module should be shared among child architectures. The gradient matching scores are computed based on the gradient information of different child models on shared parameters. There are also methods that utilize GM for dataset condensation.
In our proposed OLES, we employ the GM score as an indicator for early stopping, preventing operation parameter overfitting.
The gradient matching scores, in our approach, are calculated using the gradient information of parameters on both training and validation data. We hope our answers address your concerns.
**W2 \& Q2.**
Thank you for pointing this out. Compared to other sophisticated NAS methods, we aim to solve the skip connection domination issue in differentiable NAS and uncover the fundamental causes of this issue. OLES offers a novel perspective on the origins of the skip connection domination and proposes simple yet effective solutions. It is essential to note that our approach requires only minimal modifications to the original DARTS, thus incurring negligible additional overhead.
Practitioners can easily enhance the performance of OLES by integrating it with other state-of-the-art differentiable NAS algorithms, conducting more fine-tuning, and leveraging advanced training tricks such as the SE module. We hope our answers address your concerns.
**W3 \& Q1.**
Many thanks for the suggestion. OLES can also be employed as a plugin module for other differentiable NAS methods. It is especially well-suited for scenarios where the NAS method alternately trains on the training and validation datasets. Depending on the specific differentiable NAS methods and search spaces, it may be necessary to adjust the early stopping threshold setting to achieve optimal performance.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for your response. I have no further questions.
---
Reply to Comment 1.1.1:
Title: Thanks very much for your feedback
Comment: Many thanks for your feedback. We are glad to hear that our response has addressed your concerns. Your evaluation and support on our work are greatly appreciated! We are committed to integrating clarifications you suggested into the forthcoming permitted revision. | Rebuttal 1:
Rebuttal: Dear AC and reviewers,
We would like to thank all the reviewers for their great efforts, insightful comments, and valuable suggestions, which are very helpful for us to further improve this paper. To the best of our efforts, we’ve diligently tried to address all the specific comments, including the minor ones that have been raised by each reviewer, by adding additional experiments and detailed analyses. Next, we would like to respond to the questions raised by the reviewers point by point, and we will incorporate these clarifications in the future version of our paper. We sincerely hope that our responses can address reviewers' concerns and translate into higher scores.
In the attached PDF, we present some additional experimental results to provide further clarification for addressing the concerns of the reviewers. In Figure 1, we compare the performance of our OLES against other methods when training for longer epochs. Similarly, Figure 2 displays the number of skip connections obtained by different methods. In Figure 3, we present the search-retraining Kendall coefficients of OLES and DARTS. Lastly, Figure 4 shows the results of the original paper's Figure 1(a) and Figure 1(b) under extended training iterations. We kindly recommend the esteemed reviewers refer to the content provided in the attached PDF while reviewing our rebuttal.
Here we provide general answers to similar questions mentioned by the reviewers.
**1. Distinctions between OLES and GM-NAS**
In reality, GM-NAS and our approach are entirely distinct lines of work, addressing completely different problems. They just both employ the GM Score as a mathematical tool. Specifically, GM-NAS falls within the field of Few-Shot NAS research. They argue that due to coupled optimization between child architectures caused by weight-sharing, One-Shot supernet’s performance estimation could be inaccurate, leading to degraded search results. As a result, they propose to reduce the level of weight-sharing by splitting the One-Shot supernet into multiple separated sub-supernets. GM-NAS utilizes gradient matching scores to make splitting decisions, determining whether a module should be shared among child architectures. The gradient matching scores are computed based on the gradient information of different child architectures on shared parameters.
In contrast, the proposed OLES aims to address the problem of skip connection domination from a totally new perspective and employs the GM score as an indicator for early stopping, preventing operation parameter overfitting. The gradient matching scores, in our approach, are calculated using the gradient information of parameters on training and validation data.
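To make this distinction concrete, here is a minimal sketch (our own reconstruction under stated assumptions, not the authors' released code) of an OLES-style gradient-matching score: the cosine similarity between an operation's gradients on a training batch and a validation batch, averaged over a window and compared against an early-stopping threshold:

```python
import numpy as np

def gm_score(train_grad, val_grad):
    """Cosine similarity between flattened train/val gradients of one operation."""
    t, v = np.ravel(train_grad), np.ravel(val_grad)
    return float(t @ v / (np.linalg.norm(t) * np.linalg.norm(v) + 1e-12))

def should_early_stop(scores, threshold, window=20):
    """Stop updating an operation once its GM score, averaged over the
    last `window` iterations, drops below the threshold."""
    if len(scores) < window:
        return False
    return float(np.mean(scores[-window:])) < threshold

# Hypothetical gradients: aligned directions score ~1, opposed ~-1.
g = np.array([0.5, -1.0, 2.0])
assert abs(gm_score(g, 2 * g) - 1.0) < 1e-6
assert abs(gm_score(g, -g) + 1.0) < 1e-6
```

The window length and threshold here stand in for the 20-iteration averaging and search-space-specific threshold described in the rebuttal.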
Also, while GM-NAS offers performance enhancements to some extent, its time complexity is larger, leading to higher search costs compared to OLES. As shown in Table 1 and Table 3, the search costs of GM-DARTS are 3 and 6 times higher than those of OLES on CIFAR-10 and ImageNet, respectively. Because it makes only minimal modifications to the original DARTS, our approach incurs negligible additional overhead.
Furthermore, since they belong to different lines of work, our approach and GM-NAS are orthogonal, enabling their combined usage to further enhance performance. We will try this in future work.
Thanks again to all the reviewers for their constructive suggestions. We will do our best to improve our paper and address all concerns raised. We sincerely hope our answers address all the concerns the reviewers have raised and improve the score of our paper.
Best regards,
Authors of Submission 11229
Pdf: /pdf/0cdf533980dad361bdea0748bf11206616fb12a7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary:
This paper proposes a method, namely operation-level early stopping, to address the skip-connection domination issue in the domain of differentiable architecture search (DARTS). Though this problem has been heavily explored in the past, the authors believe that the key reason for skip-connection domination is overfitting to the validation set used in DARTS-based methods, and they propose operation-level early stopping, which conceptually stops the training of the architecture parameters if overfitting is observed.
Strengths: The problem of skip-connection dominance is a long standing problem in DARTS domain, and novel method to address this has a clear motivation.
The hypothesis of overfitting is the root cause of skip-connection domination is novel and interesting.
Analysis of architecture with validation loss is quite clear and justify their hypothesis.
I appreciate the extensive and honest experiments on all kinds of settings, even though many of the results do not surpass the state of the art.
Weaknesses:
I have several questions regarding this paper and hope to hear back from the authors.
1. Utilizing gradient matching as an indicator to perform early stopping seems okay, but this paper lacks sufficient analysis of the key differences between GM+DARTS and their approach, especially regarding why OLES is better. The related work section only states that GM-NAS introduced the gradient matching score into the NAS literature, which is not enough for readers to understand the difference.
2. In essence, using early stopping to avoid overfitting in DARTS is novel, but the experiments seem to show that OLES does not surpass the previous DARTS+PT, which is another indicator for selecting DARTS operations. I do not see the urgency to accept another indicator-based work with similar performance. In addition, as above, I do not see much comparison showing how OLES surpasses DARTS+PT. I agree that OLES can surpass the original DARTS, but if it cannot surpass other methods that aim to address the skip-connection domination issue, it is, in my humble opinion, inadequate for a top-tier conference.
3. In general, the experimental results are inferior compared to the state of the art.
To the best of my knowledge, this approach is closely related to GM+DARTS and DARTS+PT, which the authors should compare against in all experiments. However, in Table 2, for example, none of these methods appears. In addition, when compared solely to GM+DARTS: in Table 1, OLES is 0.05 better in test error; in Table 3, GM+DARTS is basically identical to OLES (24.5 vs. 24.5); in Table 4, OLES surpasses GM+DARTS by a margin of 0.2, which is somewhat significant; however, in Table 5, GM+ProxylessNAS surpasses OLES again. Why should we accept a paper that uses the exact gradient matching score in a different manner yet seems inferior to the original approach?
In addition, I would like to see how OLES addresses skip-connection domination compared with other approaches aiming at the same target, not in terms of their final performance but in terms of how well they resolve the skip-connection domination issue. After all, this is not the first paper aiming to address it.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Same as above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Moderately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful review, which is very helpful for further improving this paper. Below, we answer your questions one by one, and we hope this addresses your concerns and improves your view of the paper.
**W1.**
Thank you for pointing this out. We apologize for not providing a detailed explanation of GM-NAS in the related work due to space constraints. Although, as you mentioned, both methods employ the GM score as a mathematical tool, GM-NAS and our approach are entirely distinct lines of work. Please refer to the global rebuttal for further details.
We hope our answers address your concerns. Thanks again for the comment, and we will consider adding the differences to our revised version.
**W2.**
Thanks for the comment. In fact, our approach almost consistently outperforms DARTS-PT and other NAS methods that address the skip-connection domination issue. For your convenience, we highlight the comparison between our approach and other NAS methods in the following table. In most cases, our method remains competitive with the SOTA results.
| NAS method | CIFAR-10 | CIFAR-100 (Transfer) | CIFAR-100 | ImageNet|
|---|---|---|---|---|
| P-DARTS | 2.50 | 16.55 | 17.46 | 24.4 |
| R-DARTS(L2) | 2.95 $\pm$ 0.21 | - | 18.24 | - |
| FairDARTS | 2.54 | - | - | 24.9 |
| DARTS- | 2.59 $\pm$ 0.08 | - | 17.51 $\pm$ 0.25 | 23.8 |
| $\beta$-DARTS | 2.53 $\pm$ 0.08 | 16.24 $\pm$ 0.22 | 17.33 | 23.9 |
| DARTS+PT | 2.48(2.61 $\pm$ 0.08) | 19.05 | 18.78 | 25.5 |
| **OLES** | **2.30(2.41 $\pm$ 0.11)** | 16.30(16.35 $\pm$ 0.05) | **17.30** | 24.5 |
Despite existing efforts to explore the origins of the skip-connection domination issue and the corresponding solutions, it remains an open problem. These methods have put forth their hypotheses but have not fully resolved the issue. Compared to existing methods, the main contribution of our approach is that we offer a novel perspective on the origins of skip-connection domination and propose an effective solution. It is essential to note that our approach is remarkably simple, involving only minimal modifications to the original DARTS, thus incurring negligible additional overhead. We believe that the current performance of OLES is sufficient to validate our hypothesis and demonstrate that our method effectively resolves the skip-connection domination issue in differentiable NAS.
To further demonstrate the effectiveness of addressing the skip connection domination issue, we increase the number of iterations of supernet training. As depicted in Figures 1&2 in our attached PDF, DARTS-PT fails to solve the skip connection domination issue, indicating that their hypotheses do not entirely reveal the fundamental causes of this problem, or rather, they merely delay the occurrence of skip connection domination.
Therefore, our work is meaningful as we offer a novel perspective and idea that not only proves effective but is also simpler and more straightforward to comprehend than previous perspectives. Instead of introducing yet another indicator, we aspire to stimulate further exploration in this field based on our insightful observations regarding the overfitting phenomenon. We believe this can lead to a deeper understanding of the differentiable architecture search process. We hope our answers address your concerns, and we will highlight the differences between our approach and other methods that focus on the skip-connection domination issue.
**W3.**
Thank you for pointing this out. For both GM-NAS and DARTS-PT, the results on CIFAR-100 are not found in their original papers. For more comprehensive comparisons, we reproduced the results of these two methods on CIFAR-100 using their open-source codes, and the results are also displayed in the table below.
||DARTS+PT|GM+DARTS|OLES|
|---|---|---|---|
| CIFAR-100 (Transfer) | 19.05 | 16.45 | **16.30** |
| CIFAR-100 |18.78 | 17.42 | **17.30** |
It can be observed that the proposed OLES performs better than GM+DARTS and DARTS+PT on CIFAR-100. As previously stated, GM-NAS is a much more complex method that falls in a distinct line of research. Due to its few-shot nature, it involves higher time and space complexities. Moreover, GM-NAS is orthogonal to our approach. OLES focuses on addressing the skip-connection domination issue in differentiable NAS. We aim to uncover the fundamental causes of this issue while requiring only minimal modifications to the original DARTS, incurring negligible additional overhead. Despite this, our method remains highly competitive compared to GM-NAS. Furthermore, our main contribution is the insightful observation and hypothesis regarding overfitting, as well as the proposal of the early stopping method. The GM score, in this context, serves merely as an indicator for early stopping.
To demonstrate the effectiveness of our approach in different search spaces, we also perform OLES on the MobileNet search space. As it is not naturally designed for DARTS, we had to make several modifications (ProxylessNAS has adapted to this search space, and FairDARTS did not open-source their search process on MobileNet). Therefore, we believe there is still room for optimization in the results presented. Nevertheless, OLES achieved highly competitive performance in the MobileNet search space.
**W4.**
Thanks for the suggestion. Besides the final performance, we display the number of skip connections in the searched architecture as the number of training iterations increases in Figure 2 of our attached PDF. It is evident that DARTS exhibits a severe skip-connection domination issue, while OLES effectively resolves this problem. When trained for an extended number of epochs, methods like DARTS-PT show an increasing trend in skip connections, whereas OLES remains stable, which further verifies our observations on the fundamental causes of the skip-connection domination issue and the effectiveness of the proposed solution. |
SNEkhorn: Dimension Reduction with Symmetric Entropic Affinities | Accept (poster) | Summary: This paper presents a new dimensionality reduction algorithm, named SNEkhorn. By uncovering the novel links between Entropic affinities (EAs) and Optimal Transport (OT), the authors derive EAs with symmetric doubly stochastic normalization and the fixed row-wise entropy, which is the key to SNEkhorn. Besides, the authors show a dual ascent algorithm to compute this new affinity efficiently. Beyond this theoretical contribution, the authors showcase the benefits of SNEkhorn through numerical experiments on simulated data and real data.
Strengths: - Entropic affinities (EAs) need to be symmetrized when used in the popular dimensionality reduction (DR) algorithm t-SNE, but the symmetrization destroys the row-wise constant entropy and stochasticity properties of EAs. Deriving EAs that maintain both symmetry and these properties is therefore very natural. The proposed method is novel and simple to describe.
- The authors discover novel connections between EAs and OT, and show a dual ascent algorithm to compute EAs from OT. The proposed algorithm provides a new perspective on EAs.
- In conventional DR algorithms, the latent affinity is not doubly stochastic, which imposes spherical constraints on the embedding space. The proposed method avoids this problem.
- The simulation study and real data analysis are thorough. The authors illustrate the effectiveness from many angles.
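As background for the row-wise entropy property mentioned above, classical (asymmetric) entropic affinities calibrate one Gaussian bandwidth per point by bisection so that each row's entropy matches log(perplexity). The sketch below is an illustrative numpy implementation with assumed tolerances and iteration counts, not the paper's algorithm.

```python
import numpy as np

def entropic_affinity_row(sq_dists, perplexity, tol=1e-5, iters=50):
    """Bisect the precision beta of one row of a Gaussian kernel until the
    row's Shannon entropy matches log(perplexity), as in classical EAs."""
    target = np.log(perplexity)
    lo, hi, beta = 0.0, np.inf, 1.0
    for _ in range(iters):
        p = np.exp(-sq_dists * beta)
        p /= p.sum()
        h = -np.sum(p * np.log(p + 1e-12))
        if abs(h - target) < tol:
            break
        if h > target:            # entropy too high: sharpen the kernel
            lo = beta
            beta = beta * 2 if hi == np.inf else (lo + hi) / 2
        else:                     # entropy too low: flatten the kernel
            hi = beta
            beta = (lo + hi) / 2
    return p

# One row of squared distances to the other samples (self excluded).
row = entropic_affinity_row(np.array([1.0, 2.0, 4.0, 8.0]), perplexity=2.0)
assert abs(row.sum() - 1.0) < 1e-9                           # row-stochastic
assert abs(np.exp(-(row * np.log(row)).sum()) - 2.0) < 1e-2  # perplexity ~ 2
```

Because each row gets its own bandwidth, averaging P with its transpose (as t-SNE does) destroys these per-row entropies, which is exactly the issue the paper's symmetric construction addresses.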
Weaknesses: - There is no discussion of the computational complexity of the proposed method. Although the proposed algorithm performs better than t-SNE and UMAP, how does its computational complexity compare with that of t-SNE or UMAP? Computational cost is also important in practice.
- Some of the formulas and references in the article are incorrectly hyperlinked.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - How does the computational complexity of the proposed method compare to t-SNE? Does the proposed method take the same amount of time as t-SNE while achieving better results?
- How is the dimensionality of PCA chosen? In the reference the authors mention, the dimensionality of PCA is usually set to 30 instead of 50. Does the dimensionality affect the performance of the algorithm?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the careful reading of the manuscript, her/his assessment and relevant remarks.
> How does the computational complexity for the proposed method compare to t-SNE? Does the proposed method spend the same amount of time as t-SNE while achieving better results?
This is a very good point. We provide some additional insights about computational aspects in the general answers as well as some runtime experiments in the pdf. Although both methods have a complexity that scales quadratically with the number of samples, one can notice in these experiments that the current implementation of t-SNEkhorn is slower than t-SNE. We will add a discussion and computational times in the final version of the paper.
Importantly, note that in our paper we did not use the most recent acceleration procedures that have been proposed for t-SNE [2,3] but our approach will directly benefit from many of those computational tricks.
> How to choose the dimensionality of PCA? In the reference the authors mentioned, the dimensionality of PCA is usually set to be 30 instead of 50. Does the dimensionality affect the performance of the algorithm?
This point indeed deserves attention, as PCA is often applied as a pre-processing step before computing the distance matrix. Choosing the dimensionality of this PCA step seems to depend on the nature of the data (see e.g. [4], which recommends using 50 axes for scRNA data). On his t-SNE website (unlike in the paper, as the reviewer rightfully pointed out), Laurens Van der Maaten also recommends using 50 axes. In our experiments, as shown for instance in Table 2 of the pdf provided with the global response, we did not find any significant difference between using 30 axes and 50 axes (although this might not be true for all datasets).
[2] Laurens Van Der Maaten. Barnes-hut-sne. arXiv preprint arXiv:1301.3342, 2013.
[3] Linderman, G. C., Rachh, M., Hoskins, J. G., Steinerberger, S. and Kluger, Y. (2019). Fast interpolation-based t-SNE for improved vi- sualization of single-cell RNA-seq data. Nature Methods 16 243-245.
[4] Dmitry Kobak and Philipp Berens. The art of using t-SNE for single-cell transcriptomics.
Nature Communication, 10:5416, 2019. | Summary: This is a very interesting paper about an application of the Sinkhorn algorithm to symmetrize the matrix of entropic affinities in methods of DR like SNE, t-SNE, etc.
Strong theoretical contribution.
Some experiments to illustrate.
Strengths: The paper is very interesting for its vision, state of the art across multiple domains, strong theoretical contribution, revisiting of existing methods and proposal of a new method.
The paper is very well written, not rushed.
The symmetrization with Sinkhorn is well done, and this is a nice "tour de force" like only Britons can do.
The experiments include a grid search on the perplexity, and multiple runs for each perplexity.
Weaknesses: The paper is well written but it is also very/too dense: much information is packed, that is nice, but it takes quite some time to digest and to decode.
The notation is sometimes unconventional and difficult to follow, probably to keep it as compact as possible.
While the theoretical part is very convincing and interesting for itself, it will only appeal to a very restricted audience of people specialised in the methodological design of advanced methods of DR. Also, the experiments are not sufficient to convince practitioners that the Sinkhorn symmetrization is from now on something necessary to get the best results. The experiments lack broader comparisons with other methods that are in the state of the art (here it looks like the authors have reimplemented t-SNE and UMAP; also, there are methods that do not look for symmetry at all, although this can be detrimental in the presence of many isolated outliers). The quantitative assessment might be partly questionable: the trustworthiness is not the best DR QA indicator; what was the neighbourhood size, by the way?
The lack of an accelerated version is a possible weakness for dissemination, while NeurIPS would be precisely the ideal launching pad.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The "crowding effect" is just an intuitive explanation found in LvdM t-SNE paper; would it not be better to refer to something more conventional (empty space phenomenon, norm concentration, or just the reduction in volume from HD to LD?)?
In Figure 1, SNEkhorn or t-SNEkhorn ?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: No possible negative societal impact here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for his/her careful reading of the manuscript and insightful comments.
> The quantitative assessment might be partly questionable: the trustworthiness is not the best DR QA indicator; what was the neighbourhood size, by the way?
This is an interesting point. We wanted to evaluate the performance of the DR methods fairly and to avoid ad-hoc performance measures that are used in the optimization problems of specific methods. This is why we chose classical (if a bit unorthodox) performance measures that evaluate the preservation of global (silhouette score) and local (trustworthiness) structure in the data. We would be interested if the reviewer could suggest an alternative that we might add in the supplementary. For trustworthiness, the neighbourhood size was set to the default value of 5.
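For reference, the trustworthiness criterion mentioned here (neighbourhood size k = 5) can be computed directly from its definition. The sketch below is a straightforward dense numpy implementation, not the authors' code; scikit-learn's `sklearn.manifold.trustworthiness` provides an equivalent.

```python
import numpy as np

def trustworthiness(X, Z, k=5):
    """Trustworthiness in [0, 1]: penalizes points that enter the k-NN of a
    sample in the embedding Z without being among its k-NN in the input X."""
    n = len(X)
    dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    dz = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    np.fill_diagonal(dx, np.inf)   # exclude self from neighbourhoods
    np.fill_diagonal(dz, np.inf)
    ranks_x = np.argsort(np.argsort(dx, axis=1), axis=1)  # 0 = nearest
    penalty = 0.0
    for i in range(n):
        for j in np.argsort(dz[i])[:k]:   # k-NN of i in the embedding...
            if ranks_x[i, j] >= k:        # ...that are "intruders" in X
                penalty += ranks_x[i, j] - k + 1
    return 1.0 - 2.0 * penalty / (n * k * (2 * n - 3 * k - 1))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
assert abs(trustworthiness(X, X.copy(), k=5) - 1.0) < 1e-12  # perfect embedding
```

A perfect (identity) embedding incurs no penalty and scores 1; a random embedding loses local neighbourhoods and scores strictly below 1.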
> The "crowding effect" is just an intuitive explanation found in LvdM t-SNE paper; would it not be better to refer to something more conventional (empty space phenomenon, norm concentration, or just the reduction in volume from HD to LD?)? In Figure 1, SNEkhorn or t-SNEkhorn ?
The reviewer is raising a good point. We mentioned the "crowding effect" to be faithful to the history of t-SNE and to explain how heavy-tailed kernels first appeared in these methods. We agree that a more conventional explanation would indeed be better and will modify our article accordingly for the sake of clarity.
For Figure 1, we indeed used SNEkhorn in the embedding space, as we found that it gave decent results on the COIL dataset and we lacked space to provide both SNEkhorn and t-SNEkhorn embeddings. We will plot both methods in the supplementary material of the final version of the paper.
Note that quantitative results for t-SNE and t-SNEkhorn on COIL are provided in table 3. | Summary: The paper presents a novel approach to dealing with entropic affinities (EAs) used in machine learning for dimensionality reduction tasks, specifically in the popular t-SNE algorithm. It addresses the limitations of current symmetrization methods applied to EAs, which can compromise the entropy and stochasticity properties of the affinity matrix. The proposed method uses optimal transport to achieve a natural symmetrization, leading to a new affinity matrix. This new matrix is then leveraged in a new dimensionality reduction algorithm called SNEkhorn, which is demonstrated to outperform state-of-the-art methods on both synthetic and real-world datasets.
Strengths: 1. The paper introduces a novel approach to symmetrizing entropic affinities by formulating them as an optimal transport problem. This brings a fresh perspective to the problem and could open up new avenues for research and application in machine learning.
2. The proposed symmetrization method claims to maintain the constant entropy and stochasticity properties of the affinity matrix while being computationally efficient through dual ascent. This combination of robustness and efficiency is essential for practical applications.
3. The development of the SNEkhorn algorithm based on the new affinity matrix offers a practical implementation of the proposed approach. The demonstrated superiority of SNEkhorn over other baseline methods on various datasets further strengthens the paper's claims.
4. The paper evaluates the proposed SNEkhorn algorithm on both synthetic and real-world datasets, providing a comprehensive assessment of its performance and general applicability.
Weaknesses: Clarity of Presentation: While the abstract provides a high-level overview, some concepts, such as entropic affinities, optimal transport, and dual ascent, might be challenging for readers not already familiar with the domain. The paper should provide a clear and concise introduction to these concepts, ideally with intuitive explanations and illustrative examples. For example, what are P and C on page 3, "Symmetric Entropy-Constrained Optimal Transport"?
Robustness Analysis: Since the proposed method claims to be robust to varying noise levels, it would be valuable to include a thorough analysis of its performance under several noise levels in the datasets instead of only two. This could strengthen the claim of robustness and highlight the algorithm's practicality in real-world scenarios.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. What sensitivity analysis was performed on the hyperparameters introduced by the new symmetrization approach? How robust is the SNEkhorn algorithm to changes in these parameters?
2. Can you provide additional insights into the interpretability of the reduced representations obtained by SNEkhorn? How well does it preserve meaningful structure in the data compared to other methods? Is the visualization in Figure 6 a special case?
3.Could you elaborate on the limitations or potential failure cases of the proposed approach? What are the scenarios where SNEkhorn might not perform as well or could face challenges?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: While the paper proposes a novel approach to symmetrizing entropic affinities and introduces the SNEkhorn algorithm with promising results, it is essential to consider the potential limitations and shortcomings of the work. Some possible limitations include:
Scalability: Dimensionality reduction algorithms often face challenges with scalability when dealing with large datasets. The paper should address the computational efficiency and scalability of the proposed SNEkhorn algorithm, especially when applied to high-dimensional and massive datasets commonly encountered in real-world applications.
Parameter Sensitivity: It is essential to investigate the sensitivity of hyper parameters and assess how they impact the performance of the SNEkhorn algorithm, e.g. perplexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the appreciation of our work, insightful comments and questions.
### Answers to weaknesses:
> Clarity of Presentation: While the abstract provides a high-level overview, some concepts, such as entropic affinities, optimal transport, and dual ascent, might be challenging for readers not already familiar with the domain. The paper should provide a clear and concise introduction to these concepts, ideally with intuitive explanations and illustrative examples. For example, what are P and C on page 3, "Symmetric Entropy-Constrained Optimal Transport"?
We will add details about these concepts in the paper, as they may indeed appear challenging for some readers. Note that in our article, C is the transportation cost matrix (C_{ij} is the cost of transporting a unit of mass from sample i to sample j), while P is the transport plan (P_{ij} is the amount of mass transported from sample i to sample j), which we propose to use as the affinity matrix. We will make this clearer in the revised version.
> Robustness Analysis: Since the proposed method claims to be robust to varying noise levels, it would be valuable to include a thorough analysis of its performance under several noise levels in the datasets instead of only two. This could strengthen the claim of robustness and highlight the algorithm's practicality in real-world scenarios.
This is a very interesting question. We provide some new results studying the robustness to noise in the global reply (Figure 1 in the pdf).
### Answers to questions
> What sensitivity analysis was performed on the hyperparameters introduced by the new symmetrization approach? How robust is the SNEkhorn algorithm to changes in these parameters?
We provide a sensitivity analysis in Fig. 5 in the paper for spectral clustering applications. We also added in the global reply (see Figure 2 in the adjoining PDF) a new sensitivity analysis for the end task of dimensionality reduction. This shows the superiority of t-SNEkhorn over t-SNE over a wide range of perplexity values.
> Can you provide additional insights into the interpretability of the reduced representations obtained by SNEkhorn? How well does it preserve meaningful structure in the data compared to other methods? Is the visualization in Figure 6 a special case?
Similarly to SNE (or UMAP), SNEkhorn's axis cannot be directly interpreted due to the non-linearity of the model.
As shown in section 5, using symmetric entropic affinities and SNEkhorn lead to better robustness to heteroscedastic noise as well as enhanced clustering abilities compared to previous approaches. As such, the partitioning structure as well as close range pairwise relations are better preserved in SNEkhorn.
We displayed figure 1 and 6 to give a visual interpretation of the figures given in table 3. It is indeed an example where SNEkhorn performs much better than other methods.
> 3.Could you elaborate on the limitations or potential failure cases of the proposed approach? What are the scenarios where SNEkhorn might not perform as well or could face challenges?
First, note that while our proposed symmetric EAs are much more robust than the l2 symmetrization carried out in t-SNE, our method shares many failure cases with t-SNE, i.e. when the perplexity is too large (loss of local subtleties) or too small (spurious structures appearing). In addition, our method comes with a higher computational cost since we need to optimize twice the number of parameters (vectors $\boldsymbol\gamma$ and $\boldsymbol\lambda$) for affinity matrix estimation (instead of only $\boldsymbol\epsilon$ for the directed EAs). For runtime values, we refer to Table 1 of the pdf in the global answer above. |
Towards this goal, the EA problem is first equivalently written as a (semi-relaxed) OT problem, where the transport plan recovers the EAs. Symmetrization is then explicitly enforced as constraints on the transport plan in this OT problem. Using optimality conditions, it is shown that such EAs maintain the crucial row-wise entropic equalities (Prop. 4). As a result, the EAs are symmetric, doubly stochastic, and satisfy the entropic equalities, making them organically superior to the existing heuristics.
Using the proposed EAs, a DR formulation is proposed (leading to the so-called SNEkhorn). Details of solving this problem are presented. The proposed embeddings are empirically compared to the state of the art with respect to spectral clustering and dimension reduction. The improvements are significant.
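The doubly stochastic normalization at the heart of this construction can be illustrated with plain Sinkhorn-Knopp scaling. Note this sketch only enforces unit row and column sums on a symmetric Gaussian kernel; it does not implement the paper's additional row-wise entropy constraints, which require the dual ascent algorithm described there. The bandwidth choice below is an arbitrary assumption for illustration.

```python
import numpy as np

def sinkhorn_doubly_stochastic(K, iters=2000):
    """Sinkhorn-Knopp: alternately rescale rows and columns of a positive
    kernel K. For symmetric K, diag(r) K diag(c) converges to a symmetric
    doubly stochastic matrix."""
    r = np.ones(len(K))
    c = np.ones(len(K))
    for _ in range(iters):
        r = 1.0 / (K @ c)
        c = 1.0 / (K.T @ r)
    return r[:, None] * K * c[None, :]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
D = ((X[:, None] - X[None, :]) ** 2).sum(-1)   # pairwise squared distances
K = np.exp(-D / D.mean())                      # symmetric Gaussian kernel
P = sinkhorn_doubly_stochastic(K)
assert np.allclose(P.sum(axis=0), 1.0)         # doubly stochastic...
assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(P, P.T)                     # ...and symmetric
```

This contrasts with the l2 symmetrization (P + Pᵀ)/2 used in t-SNE, which preserves neither stochasticity nor the per-row entropies.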
Strengths: 1. The organisation and write-up are well polished, making it an easy read.
2. Connections made between EAs and OT are interesting, especially in the light of the symmetrisation issue with existing DR approach.
3. Empirical improvements over baselines are impressive.
Weaknesses: ..
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It seems a primary reason for having Q_Z^{ds} in SNEkhorn is to enable fast objective-gradient computation via Sinkhorn. Is this true?
2. Will it be more appropriate to consider other affinities like \tilde{Q}_Z etc. in place of Q_Z^{ds} ? If \tilde{Q}_Z leads to spherical embeddings, can there be applications where this is desirable? Any discussion on this and alternatives may help the reader.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations were discussed in the concluding section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for her/his insightful comments and questions. Our answers to the questions raised can be found below.
> It seems a primary reason for having Q_Z^{ds} in SNEkhorn is to enable fast objective-gradient computation via sinkhorn. Is this true?
It is true that this enables fast objective-gradient computation via the Sinkhorn algorithm. However, the primary reason for using Q_Z^{ds} is to counter the sphere-concentration phenomenon for the embeddings that occurs when matching a non-DS affinity (for instance \tilde{Q}_Z or Q_Z) with a DS affinity for the input data [1]. This geometrical effect disappears when the embedding affinity is itself DS, which is very useful as soon as one wants to embed onto a flat space.
> Will it be more appropriate to consider other affinities like \tilde{Q}_Z etc. in place of Q_Z^{ds} ? If \tilde{Q}_Z leads to spherical embeddings, can there be applications where this is desirable? Any discussion on this and alternatives may help the reader.
Thank you for this very interesting comment. Spherical embeddings are desirable if the embedding space is indeed a sphere, which is what is done in [1]. In our work, we rather focused on flat Euclidean spaces, as this is what practitioners are usually interested in when performing dimensionality reduction (for instance with scRNA-seq data). Note that this also suggests, as future work, studying SNEkhorn for embeddings on spherical manifolds. This discussion will be added to the paper.
[1] Yao Lu, Jukka Corander, and Zhirong Yang. Doubly stochastic neighbor embedding on spheres. Pattern Recognition Letters, 128:100–106, 2019.
---
Rebuttal 2:
Comment: Thanks for the reply. I would like to keep my score after reading the rebuttal and going through other reviews. | Rebuttal 1:
Rebuttal: We first would like to thank all the reviewers for their remarks and questions.
You may find attached a pdf with some new results to answer the various points raised. The new results are as follows.
## New results
### Figure 1 : Robustness to noise
In Figure 1, we focus on reviewer s51e's question about the robustness analysis. We plot spectral clustering scores for three Gaussian clusters with variances $\sigma$, $2\sigma$ and $3\sigma$. This figure shows that symmetric entropic affinities are more robust than the Sinkhorn kernel as $\sigma$ increases. Note that the latter is known for its robustness to heteroscedastic noise, as shown in [5].
### Figure 2 : Sensitivity analysis
In Figure 2, we extend the sensitivity analysis performed for spectral clustering (Figure 5 in the submitted paper) to DR scores. This figure shows that tSNEkhorn outperforms tSNE over a wide range of perplexity values. We hope this partly answers reviewer s51e's request for sensitivity analysis.
### Table 1 : Runtime experiments
In Table 1, as requested by reviewer 8Btp, we display the runtimes of tSNE and tSNEkhorn on a few datasets. Even though both algorithms have a quadratic complexity with respect to the number of samples, tSNEkhorn's optimization is a bit more involved and requires more time than tSNE.
### Table 2 : Sensitivity to PCA dimensions
Finally, to answer the question of reviewer 8Btp, we test different values of dimensions for the pre-processing PCA step. We find that the performance of both tSNE and tSNEkhorn, measured by the silhouette score on COIL/Olivetti/UMNIST, is stable across these values.
[5] Boris Landa, Ronald R. Coifman, and Yuval Kluger. Doubly stochastic normalization of the Gaussian kernel is robust to heteroskedastic noise. SIAM Journal on Mathematics of Data Science, 3(1):388–413, 2021.
Pdf: /pdf/1739da32d4c567cbb871c24f106c7736606fbba2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Private Everlasting Prediction | Accept (oral) | Summary: This work proposes the notion of private everlasting prediction. Given a training dataset, the predictor responds to a sequence of queries and privacy has to be preserved for both the training data and all queries. The authors explore the PAC learnability problem under this model and show that the sample complexity scales quadratically with the VC dimension through a generic construction from non-private learners.
Strengths: 1. The authors formulate a new notion of private everlasting prediction. This is an original theoretical concept which extends the single query prediction model and has practical relevance.
2. The authors prove several interesting theoretical properties for private everlasting prediction. First it requires that the hypothesis needs to change over time. Second, the authors show that the sample complexity scales quadratically with the VC dimension of the concept class. This is a significant improvement compared to private learning which is impossible over infinite domains.
3. The writing is clear and easy to follow.
Update: increased my score to seven after seeing the authors' response and other reviewer comments.
Weaknesses: 1. In the algorithm GenericBBL, $\tau>1.1\times 10^{10}$ seems too large. Can the constant be made any smaller?
2. The algorithm is not computationally efficient.
3. In terms of writing, the authors could provide a brief overview of the proof ideas for Theorem 5.1.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In practice, some large models are pretrained on public data which can be considered non-private. Theoretically, is it relevant to consider the case when the training set $S$ is non-private and the stream of queries is private? How would the sample complexity change?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. Below we address the points you make:
**> In the algorithm GenericBBL, $\tau>1.1\times 10^{10}$, seems too large. Can the constant be made any smaller?**
We have not optimized constants as our contribution focuses on asymptotic complexity. The paper introduces a new concept - private everlasting prediction - and demonstrates an unbounded asymptotic improvement over private learning. While the construction may be impractical at this stage, we believe that it provides a significant theoretical improvement that would be followed by more theoretical and practical research towards making everlasting predictors usable in a variety of practical applications.
**> The algorithm is not computationally efficient**
Although our algorithm is not computationally efficient it does yield in some cases polynomial time constructions. We clearly state the question of making the construction efficient as an open problem. We expect that future work would resolve the question whether an efficient generic construction exists.
**> In terms of writing, the authors could provide a brief overview of the proof ideas for Theorem 5.1.**
We will include an overview of the proof. Very briefly - the high-level idea is to use LabelBoost to label the unlabeled dataset and use BetweenThreshold to predict labels. The privacy and accuracy guarantees come from these two algorithms.
**> In practice, some large models are pretrained on public data which can be considered non-private. Theoretically, is it relevant to consider the case when the training set S is non-private and the stream of queries is private? How would the sample complexity change?**
Variations of this question were studied in the standard private learning model (where the learner releases a model), e.g., by [1,2]. We believe that this could also be relevant in our context, but this would require adjustment to the learning scenario to be meaningful. More specifically, in our context, if pre-training can be performed on O(VC) non-private labeled examples, then that would nullify the need for private learning/prediction, as we could simply release the resulting non-private model.
We believe that this would make sense in other learning scenarios where you would like to use the private queries in order to improve the error of the non-private model you obtained from the initial (non-private) training set. Our privacy definition could potentially fit such scenarios.
[1] Beimel, Nissim, Stemmer. Private Learning and Sanitization: Pure vs. Approximate Differential Privacy. RANDOM 2013.\
[2] Bassily, Moran, Alon. Limits of Private Learning with Access to Public Data. NeurIPS 2019
---
Rebuttal Comment 1.1:
Title: Thank you for your resonse
Comment: Thank you for your response. I believe the new concept is very meaningful and could inspire a new line of research. I will increase my score to 7. | Summary: The paper discusses private everlasting prediction, which extends private prediction to answer an unlimited sequence of prediction queries. The goal is to present a generic private everlasting predictor with low training sample complexity. The paper introduces definitions for everlasting prediction and everlasting differentially private prediction interfaces. It then presents a generic construction called GenericBBL for private everlasting prediction.
Strengths: 1. This paper introduces a formal framework for everlasting prediction.
2. This paper provided a comprehensive privacy analysis to the proposed privacy-preserving everlasting prediction task.
Weaknesses: 1. Limited applicability: The paper focuses on the theoretical aspects of private everlasting prediction and does not provide concrete practical applications or empirical evaluations. It remains to be seen how well the proposed approach translates into real-world scenarios.
2. Lack of comparative experimental analysis: The excerpt does not mention any comparison or benchmarking against existing methods or alternative approaches. Without such comparisons, it is difficult to assess the novelty or superiority of the proposed method.
3. Lack of empirical validation: The excerpt does not mention any empirical experiments, simulations, or case studies to validate the effectiveness or robustness of the proposed private everlasting predictor. It is unclear how the proposed construction performs in terms of privacy preservation, utility, and sample complexity compared to other techniques.
4. The contents shown in the supplementary materials contain a lot of redundant information compared with the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: What is the application scenario of the everlasting prediction?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: It lacks experimental analysis of the proposed algorithm, which makes it hard to justify the utility of the proposed algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. Below we address the points you make:
**> Limited applicability: The paper focuses on the theoretical aspects of private everlasting prediction and does not provide concrete practical applications or empirical evaluations**
We agree that the construction of efficient practical algorithms that can be used in concrete practical applications is important. Establishing a sound theory of private learning and improving the asymptotic behavior of private learning algorithms is no less important. In fact, many if not most of the practical applications of differential privacy would not have existed had they not begun as concepts introduced in theoretical results, even if the initial constructions were impractical.
**> Lack of comparative experimental analysis**
We present a novel concept (everlasting prediction) as an alternative to private learning. We also present a construction demonstrating that everlasting prediction is possible in tasks where private learning was proved to be impossible. For example, the sample complexity for private learning of threshold functions grows with the domain size. In contrast, as we mention in the paper, threshold functions can be predicted efficiently, regardless of the domain size.
**> Lack of empirical validation... makes it hard to justify the utility of the proposed algorithm**
At this phase of this research, an experimental analysis and an empirical validation would not yield meaningful conclusions beyond deeming the current algorithms "impractical". Rather, a mathematical asymptotic analysis is the right tool for analyzing our results. We believe that our work would lead to future studies on this topic, both theoretically and practically oriented.
**> The contents shown in the supplementary materials contain a lot of redundant information compared with the paper**
The supplementary materials contain all the formal details which were omitted from the main paper. We are open to reorganizing the paper (within the page limit).
**> What is the application scenario of the everlasting prediction?**
Essentially, private everlasting predictors can be used in many scenarios where we would like to use the outcome of private learning algorithms for prediction, e.g.,
1. A hospital might use a private everlasting predictor in supporting decisions whether patients need to be treated for COVID19, based on their tests and medical history. The process would bootstrap with an initial sample of patients that would be labeled by experts and continue with private prediction. Due to the very sensitive nature of the data (as well as legal and ethical considerations) it is important to protect the information of both the initial sample and of patients to whom prediction is applied.
2. Similarly, a bank might use a private everlasting prediction as part of its decision process whether to offer loans to customers.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for the explanation! It addresses my concerns and I agree that the new concept is very meaningful. I would like to increase my score to a 7. | Summary: This paper provides an intriguing path to evading known lower bounds for differentially private PAC learning. Whereas nonprivately the sample complexity of learning is proportional to the VC dimension, the private sample complexity for (pure) DP is characterized by the representation dimension, which can be much larger. In particular, some natural classes such as threshold functions over an infinite domain (e.g. Z or R) have finite VC dimension but infinite representation dimension, so they can be learned nonprivately but not privately (even with approximate DP, by a separate result).
This paper shows that this impossibility only holds for privately releasing a hypothesis and not for privately classifying samples. In particular, it shows that in the online prediction setting, for any hypothesis class there is a generic stateful algorithm with sample complexity based on the VC dimension (to be precise, quadratic in the VC dimension) that can privately answer an unbounded sequence of iid queries with the same accuracy guarantee as PAC learning, but where the state of the algorithm remains hidden and only the query labels are revealed. The algorithm is allowed to remember previous queries but must be differentially private with respect to them as well as the points of the original (labeled) training set. It also shows that statefulness is necessary to achieve this result.
For general hypothesis classes the algorithm is inefficient, but it can be made efficient on important special cases, including threshold functions.
Strengths: This is a very interesting result that enhances our understanding of differentially private learning by bypassing known lower bounds. It essentially gives a separation between interactive and non-interactive private prediction, showing that interactivity reduces the private sample complexity to nearly that of nonprivate learning (as long as we only have to output predictions and not a hypothesis). The paper is quite well-written.
Weaknesses: There are two obvious limitations of the main result, that yield very interesting open problems posed in the paper: whether it's possible to come up with a similar algorithm that is computationally efficient for all hypothesis classes, and whether it's possible to improve the sample complexity from VC^2 to VC.
Minor corrections:
104: points --> point
147: "upto" should be "up to"
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1) Is it strictly necessary for accuracy that the query points are iid? Or could we hope for the following stronger statement:
Given query points x_1, x_2, ... and a subset of indices I (not known to the algorithm) such that x_i is sampled from the distribution for i\in I but is chosen by the adversary for i\notin I, we require accuracy only for the points in I.
That is, as long as the challenge queries are sampled from the distribution, we want to be robust to data poisoning using the other queries. Does this seem like it could be possible, or is it too strong?
2) Minor clarification: In the technical overview on page 3-4 it sounds like the parameters discussed are based on the standard composition bound, without using BetweenThresholds. It might be helpful to clarify whether this is the case and if we get a similar or only slightly weaker result without using BetweenThreshold, or if we truly need the tighter privacy analysis of BetweenThreshold.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. Below we address the points you make:
**> There are two obvious limitations of the main result that yield very interesting open problems… computational efficiency and sample complexity**
We believe that - even with these questions unresolved - the concept of everlasting prediction is of interest to the private learning community, in particular because it gets around impossibility results. We hope that follow up work will resolve the open questions we present in the paper.
**> Is it strictly necessary for accuracy that the query points are iid? ... Does this seem like it could be possible, or is it too strong?**
We believe that a construction similar to ours should work in a setting where not too many of the challenge queries are chosen adversarially, provided that the given non-private learners can withstand a poisoning attack with related parameters. We view this research direction (and in particular the stronger formulation you suggested) as an interesting direction for future work.
**> Minor clarification ... can we get a similar or only slightly weaker result without using BetweenThreshold**
We can get a similar (but weaker) result even without BetweenThresholds. Specifically, this will yield an algorithm in which the initial labeled sample S has a worse dependency on the accuracy parameter alpha. | Summary: This work studies differentially private prediction. It has two major contributions:
a) Prediction corresponds to being given an initial labeled training set, and then subsequently making predictions on other data points based on it. This paper shows that differentially private prediction can be performed on an unbounded number of queries, with strong accuracy guarantees, with the initial training set sample complexity scaling only as a polynomial in the VC dimension of the hypothesis class. Prior work either studied prediction for a small number of queries or studied the more stringent task of private PAC learning where significantly stronger lower bounds are known (PAC learning asks for the release of an entire model as opposed to predictions alone).
b) Since queries correspond to data points, they are usually sensitive user information. The authors formalize a model of privacy (similar to joint differential privacy) that gives a meaningful notion of privacy for these data points, even when adversaries can choose the queries adaptively. Their algorithms satisfy this notion of privacy.
Strengths: This paper makes a valuable and somewhat surprising discovery by proving that differentially private prediction can be performed on an unbounded number of queries, with no training set sample complexity dependence on the number of queries! They leverage standard techniques such as sparse vector in clever ways to do so. They operate in rounds, and use queries themselves as data points for future rounds, calling upon techniques from the semi-supervised differentially private learning literature . Their results suggest new ways to get around lower bounds for differential privacy.
Their algorithms involve reducing private prediction to non-private PAC learning, and while their techniques are not yet practical in many cases (because of time complexity), their reduction can be readily extended to any non-private learning algorithm, and hence there is lots of potential for future work to come up with ways to make their techniques more practical.
Weaknesses: One (minor) drawback is that the notion of privacy defined for the adaptively chosen queries is (necessarily) not as strong as would be nice since the label for a data point needs to depend on the data point to achieve any reasonable notion of accuracy (since otherwise you can only do something like randomized response).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1) The reasoning for the use of LabelBoost didn't entirely make sense to me - the stated reason was that the predictions provided during a round are not consistent with any concept in the hypothesis class. However, it’s not immediately clear to me that this is a problem - for example, using agnostic (non-private) learning algorithms may get around this issue (at the expense of worse sample complexity dependence on the accuracy parameter). More explanation of this would be useful.
2) The comment about their model of prediction, in principle, allowing memorization was interesting. It’s not clear that this is true- it seems like memorizing something in the training set would affect future predictions which would affect the adversary’s views. Would love more clarification on this.
3) Is there a simple (eps, 0)- version of this algorithm? It’s not clear to me that approx DP is necessary (sparse vector by itself is pure DP, though a variant is used in this paper).
4) The diagrams in the privacy proofs are a little too convoluted to be useful to the reader. I don’t have great suggestions on how to simplify them, but the privacy proofs by themselves are relatively simple, and I’m not sure the diagrams make them simpler to understand.
5) Some minor comments:
a) In Algorithm LabelBoost step 4, ‘choose’ based on exponential mechanism is not well defined since the score function is not specified (it can be inferred from context that it’s the negative of empirical risk) but specifying would be good.
b) In Claim E.4 don’t you need to account for the fact that the function labeling the sample is not the real function, but rather the one chosen by LabelBoost? Specifically, Claim E.3 is about a function with 0 sample error, whereas LabelBoost will incur some error, and hence I think you want a version of claim E.3 where sample error is bounded by alpha.
c) On page 8, where it says ‘using standard privacy amplification arguments, BetweenThresholds can be modified to allow for c times of outputting bot before halting)’, I think you mean privacy composition.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Limitations adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. Below we address the points you make:
**> the notion of privacy defined... is not as strong as would be nice since the label for a data point needs to depend on the data point**
We don’t see the fact that the adversary does not get to see the label of one point (where the inputs differ) as a serious weakness - this is a necessity due to the characteristics of the problem, and it makes sense in many applications of private prediction (e.g., when prediction is used to support a decision whether to give a patient a certain treatment, it is OK for the patient to learn the result of applying the prediction to their own medical information; what is important is that the privacy of other patients is preserved). Note that the adversary we consider is nevertheless very powerful. In particular, after the single point where it does not see the label it can still adaptively choose an unbounded number of points and learn their labels.
**> The reasoning for the use of LabelBoost didnt entirely make sense to me**
In our construction we need to ensure that the sum of $\alpha_i$ converges, so that the overall error is under control. LabelBoost helps here because its utility guarantees are better than what we can guarantee during "runtime", i.e., when responding to the queries online. We do not know if LabelBoost is necessary for this, but so far we could not get the error to converge without it.
**> The comment about their model of prediction, in principle, allowing memorization was interesting. It's not clear that this is true- it seems like memorizing something in the training set would affect future predictions which would affect the adversary's views**
Memorization need not be of just the initial training set but also of the query points posed to the predictor. What we meant to say is that our construction yields private predictions regardless of how the internal non-private learners operate, and even in case they memorize points of the training set or points presented as queries.
A possible example for this would be the class $C^{enc}_{thresh}$ (described in Item 1 in the same section). Bun and Zhandry [2016] showed that privately learning this class is computationally hard, whereas non-private learning is easy. This difference actually follows from the private learner's inability to memorize data points from the training set. As we mentioned in Item 1, our construction yields an efficient learner for this class (in our privacy model). The resulting learner would not leak to the adversary information about data points, but it would still memorize some of them internally. We will include this observation in the final version.
**> Is there a simple (eps, 0)- version of this algorithm?**
We don't know whether pure-DP everlasting predictors exist. Our analysis strongly relies on the advanced composition theorem, and hence yields approximate-DP.
**> The diagrams in the privacy proofs are a little too convoluted to be useful to the reader**
Thank you for the comment. We will consider whether to keep the diagrams (or simplify them).
**> In Algorithm LabelBoost step 4, 'choose' based on exponential mechanism is not well defined (it can be inferred from context)**
Right. We will state this explicitly.
**> In Claim E.4 don't you need to account for the fact that the function labeling the sample is not the real function, but rather the one chosen by LabelBoost?**
Yes, the samples are labeled by the concept chosen by LabelBoost. We argue that this concept has low error compared with the original concept in Claim E.3.
**> On page 8... I think you mean privacy composition**
Yes, we use advanced composition because BetweenThreshold will halt within c times with high probability.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses! | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Importance of Exploration for Generalization in Reinforcement Learning | Accept (poster) | Summary: The paper motivates the importance of exploration for generalization in contextual MDP with a tabular example, where the context is either the starting state or an uncontrollable random deviations in the transition model. The authors introduce an exploration method for distributional DQN, called EDE, which estimates the epistemic uncertainty of the network as the variance of quantiles between ensemble members. They use this uncertainty in an UCB-style exploration algorithm with varying exploration coefficients, called TEE. EDE/TEE is evaluated on the ProcGen baselines and on the Crafter environment. Results indicate that the average median over test MDP of all environments is higher than baselines with epsilon greedy exploration.
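The summary's description of EDE/TEE (epistemic uncertainty estimated as the variance of quantile estimates across ensemble members, plugged into a UCB-style action rule) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: the function name `ucb_action`, the array shapes, and the exploration coefficient `phi` are all assumptions.

```python
# Hypothetical sketch of UCB-style exploration with ensemble quantile
# variance as the epistemic-uncertainty estimate (illustrative only).
import numpy as np

def ucb_action(quantiles, phi=0.5):
    """Pick an action via Q(s,a) + phi * sqrt(epistemic uncertainty).

    quantiles: array of shape (n_ensemble, n_actions, n_quantiles),
               the per-head quantile estimates of the return for each action.
    """
    # Mean over ensemble members and quantiles gives the Q-value estimate.
    q_values = quantiles.mean(axis=(0, 2))          # shape (n_actions,)
    # Disagreement across ensemble members per quantile, averaged over
    # quantiles, serves as the epistemic-uncertainty proxy.
    epistemic = quantiles.var(axis=0).mean(axis=1)  # shape (n_actions,)
    return int(np.argmax(q_values + phi * np.sqrt(epistemic)))

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 4, 51))  # 5 heads, 4 actions, 51 quantiles
a = ucb_action(q, phi=0.5)
assert 0 <= a < 4
```

In a TEE-like setup, different parallel actors would call such a rule with different values of `phi`, so that some actors explore more aggressively than others.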
Strengths: The paper is very well written and contains an enormous amount of references, for which the authors are to be commended! The insight that exploration has a larger role for generalization in CMDP than just finding the path to the largest reward (as it is in single MDP) is very interesting and well made. The presented exploration method makes sense and outperforms epsilon-greedy baselines in some of the ProcGen baselines and probably also in Crafter.
Weaknesses: The biggest issue with the paper is that its two parts (exploration for CMDP and EDE) do not really have much to do with each other. EDE/TEE is a general exploration algorithm, which does not use any of the insights from the first part to explore specifically for CMDP. Furthermore, the first part could be more precise for which CMDP exploration is helpful (in the presented analogy), and the second part is not all that novel. Finally, the experiments are missing important baselines and the results of EDE do not seem to be significantly better. To clarify these criticisms:
- The insight given in Figure 1 is very nice, but *only* applies to CMDPs where the context is different initial states. The example given in Figure 11 goes beyond this restriction, but it remains unclear what exactly the class of CMDP that can be solved by exploration is. It appears that either the context is fully observable in the state, in which case the tabular analogy breaks down, or the assumption that all $\mathcal M \sim q_{\mathcal M}$ have the same optimal policy $\pi^*$ does not apply to most CMDP, including the ProcGen benchmarks. It also does not help that the authors phrase uncertainty about an unobserved context as aleatoric uncertainty (which is a reasonable classification), as the existence of $\pi^*$ would make the distinction effectively meaningless.
- EDE is based on the ensemble disagreement (variance) of the output of a distributional RL algorithm (QR-DQN). While the reviewer could not find a paper that does this *exact* combination, it is a very straightforward combination of many papers that pair ensemble disagreement for exploration (many cited by the authors) with quantile regression DQN. While not in the context of exploration, other papers have already made this combination for risk-avoiding RL (Eriksson et al., 2022; Hoel et al., 2023). In its defense, EDE/TEE uses some non-standard components, like every head being trained on its own minibatch, instead of subsets of the same minibatch as in [78, 80, 81], and different exploration parameters for different actors. These are not new, though, and the overall novelty of EDE/TEE seems low.
- What is the connection between the first and the second part of the paper? It seems that every exploration algorithm would help this type of CMDP generalization, so why did the author specifically propose EDE? This seems to be a missed opportunity, as the first part could be strengthened by a clearer definition of exploration, too. For example, most exploration methods (including TEE) optimize a trade-off between value and epistemic uncertainty, whereas others like task-agnostic exploration [87, 122] only maximize entropy or minimize uncertainty. Which type of exploration is the right one for CMDP generalization?
- Does EDE ever use the aleatoric uncertainty $\sigma_\text{ale}$ in any meaningful way? Is there any significant difference between EDE and a DQN ensemble trained in the same manner? The reviewer doubts that the variance of a DQN ensemble is more tainted by aleatoric uncertainty than one made of QR-DQN. Figure 5 seems to suggest this, but has DQN-UCB/TEE also been trained in the exact same way as EDE?
- It is commendable that the authors ran a range of comparisons in Figure 5, but it is surprising that in Figure 5 EDE seems to have a significant advantage in all quantities, but against PPO/IDAAC there seems to be no advantage in mean and optimality gap. This could be accurate, but the reviewer still recommends the authors to check their code again. Which type of exploration did the PPO and DAAC baselines use here exactly? The reviewer also would have liked a comparison with intrinsic reward methods or UBE [82], as TEE only takes local uncertainty into account, which has proven to be suboptimal for exploration.
- Finally, the conclusions drawn from the presented results appear questionable. Improving over PPO (presumably without advanced exploration) w.r.t. the median but not the mean is suspicious, and the learning curves in Figure 13 show that EDE significantly beats PPO in 5 environments but significantly loses to PPO in another 5 (there seems to be no significant difference in the other 6 environments). This does not look like a significant improvement by EDE! Moreover, the story of the paper is that exploration helps generalization. Can you also plot the generalization gaps for all algorithms/environments in Figure 13?
**Additional References**
- H. Eriksson, D. Basu, M. Alibeigi, and C. Dimitrakakis. Sentinel: taming uncertainty with ensemble based distributional reinforcement learning. In Uncertainty in Artificial Intelligence, pages 631-640. PMLR, 2022.
- C.-J. Hoel, K. Wolff, and L. Laine. Ensemble quantile networks: Uncertainty-aware reinforcement learning with applications in autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 2023.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Can you specify precisely under which conditions exploration is useful for tabular CMDP?
2. What is the connection between the first and the second part of the paper? Wouldn't every exploration algorithm help this type of CMDP generalization?
3. Which type of exploration is the right one for CMDP generalization (i.e. for your answer of Q1)?
4. Which type of exploration did the PPO and DAAC baselines use?
5. What do the generalization gaps in Figure 13 look like?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: It is unclear which exploration is best for CMDPs and whether the performance of EDE is significantly better than the baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and insightful questions! We were glad you found our insight interesting and well made and our literature review extensive.
> Connection between two parts (Q2)
As we explain at the beginning of Sec 4, the first part is meant to show that good exploration can help generalization even in tabular MDPs w/o deep learning. The second part shows that applying this insight to deep RL requires a more careful design of the exploration strategy since empirically not all exploration methods help in more complex environments.
> novelty
We believe that we have been straightforward with the fact that many components of EDE have been studied in prior works. The main claim we are making is that they can be used to improve generalization significantly. We believe this claim is novel and substantiated by our empirical results. Please see the common response for more details on the core contribution of our paper.
> Intrinsic motivation and UBE
Please see our general response. We believe that the baselines are representative of general-purpose exploration methods. For example, UBE belongs to the family of Thompson sampling algorithms following Bootstrapped DQN.
> Fig 1 and conditions on CMDPs (Q1)
The goal of Fig 1 is a high-level intuition on how exploration can help generalization in RL. Characterizing CMDPs (which is a very large class of problems) is an open area of research. Given the complexity of the problem, we believe it is beyond the scope of our paper. For different initial states, good exploration should generally help. We argue this analytically in Appendix A. For different dynamics, the story is more complicated. “The change in transition is not highly correlated with the action” could be a good condition, but we do not have a precise theoretical characterization. Finally, for different rewards, very strict assumptions are needed to ensure that generalization is possible. Please see [1] for more analysis.
> existence of $\pi^\star$
Thank you for pointing this out. The precise statement [1] is that $\pi^\star$ achieves nearly optimal performance (up to $\alpha$) on all MDP: $V^{s_0}\_{\mathcal{M}}(\pi^\star) \geq \max\_\pi V\_\mathcal{M}^{s_0}(\pi) - \alpha, \forall \mathcal{M}$ i.e., we assume there exists a policy that does reasonably well on all MDPs of interest without re-training on the unseen MDP. This assumption only ensures that the problem is solvable, but the RL algorithm does not rely on this assumption in any way (i.e., it will find a bad solution if no good solution exists). We will clarify this in the revision.
> Use of aleatoric uncertainty
EDE uses the aleatoric uncertainty by actively ignoring it. This is distinct from risk-sensitive RL which aims to avoid undesirable outcomes by avoiding actions with high aleatoric uncertainty. This is an important topic but not one that we focus on. Future works could combine both (e.g., avoiding high aleatoric uncertainty during test time for better test performance in environments where bad actions can have large penalties).
> EDE vs DQN ensemble
Neither the DQN ensemble nor the QRDQN ensemble is trained on separate minibatches, so DQN+UCB and DQN+TEE are directly comparable to QRDQN+UCB and QRDQN+TEE. We see that the QRDQN variants improve upon the DQN variants, highlighting the need to separate out the aleatoric uncertainty. Without this separation, the variance of the DQN ensemble contains both uncertainties. In Fig 16c, we show an ablation of QRDQN+UCB with both uncertainties, and it does significantly worse, which means the separation is important.
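To make the separation concrete, the decomposition can be sketched as follows (an illustrative sketch with made-up shapes and names, not our actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble output for one state: K heads, A actions, N quantiles.
K, A, N = 5, 4, 51
theta = rng.normal(loc=1.0, scale=0.5, size=(K, A, N))

# Each head's Q-value estimate is the mean over its quantiles.
q_per_head = theta.mean(axis=-1)              # shape (K, A)

# Epistemic uncertainty: disagreement between heads about the Q-value.
sigma_epi = q_per_head.std(axis=0)            # shape (A,)

# Aleatoric uncertainty: average spread of the return distribution itself.
sigma_ale = theta.std(axis=-1).mean(axis=0)   # shape (A,)
```

A plain DQN ensemble only has access to `q_per_head`, so its disagreement mixes both sources; the quantile heads are what make the second quantity available.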
> Comparison against PPO and checking code
We use the standard rliable library [2] to produce the results, so it's unlikely that there is a bug.
> Difference in mean and median
It is well known in statistics that the median and the mean can order methods differently. We have not claimed that EDE outperforms PPO in every game; the lower mean arises because, as you have noticed, there are some games where all value-based (VB) methods perform much worse than policy-based (PB) methods (for reasons unrelated to exploration), and the mean is sensitive to these extreme values.
When there are many games, it is customary to report robust summary statistics [2]. Research on Atari shows that a single algorithm rarely outperforms prior works in all games. We agree that it is valuable to understand why VB approaches fail in particular games, but we don't think this contradicts our claim that EDE outperforms PPO on Procgen overall, in the sense that the standard RL literature uses the term "*outperform*". We already comment on this in the caption of Fig 14 but would be happy to emphasize it.
> Exploration policy of PPO/DAAC (Q4)
They use the standard Boltzmann policy to explore.
> What kind of exploration (Q3)
The type of exploration we support in this paper is the classical optimism-based that balances value and uncertainty since there is a well-defined task. Please see the common response for more details.
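Concretely, the optimism-based rule we mean can be sketched like this (names such as `q_per_head` and `phi` are illustrative, not our exact code):

```python
import numpy as np

def ucb_action(q_per_head: np.ndarray, phi: float = 1.0) -> int:
    """Optimism-based action selection from an ensemble of Q estimates.

    q_per_head: array of shape (K, A) -- K ensemble heads, A actions.
    phi: exploration weight trading off value against epistemic uncertainty.
    (Illustrative sketch, not the paper's actual implementation.)
    """
    q_mean = q_per_head.mean(axis=0)   # value estimate
    sigma = q_per_head.std(axis=0)     # epistemic-uncertainty proxy
    return int(np.argmax(q_mean + phi * sigma))
```

With `phi = 0` this reduces to greedy exploitation; larger `phi` favors actions the ensemble disagrees about.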
> Generalization gap (Q5)
As pointed out in the footnote on page 3, the notion of generalization we focus on is **test return after a fixed number of interactions** [3], because the generalization gap can be misleading (e.g., a random policy has a gap of 0). In supervised learning, the gap is meaningful because almost all models can fit the training data perfectly, but this is not the case for RL. In terms of the gap, EDE is on par with other methods.
**References**
[1] When Is Generalizable Reinforcement Learning Tractable? Malik et al.
[2] Deep Reinforcement Learning at the Edge of the Statistical Precipice. Agarwal et al.
[3] Leveraging Procedural Generation to Benchmark Reinforcement Learning. Cobbe et al.
**Conclusion**
We hope our answers and proposed revision have adequately addressed your questions and concerns and that you will consider raising your score. Please let us know if there is anything preventing you from recommending acceptance and we will respond to any outstanding questions.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their thorough answers. However, the reviewer still has some doubts and open questions:
- Which exploration does QRDQN use in Figure 13?
>Our experiments also show that many exploration methods which were designed to improve performance on a single training environment (e.g., NoisyNet, ez-greedy, bootstrapped DQN etc.) are insufficient when applied in such settings. This motivated the design of EDE, [..] we think the algorithmic contribution is secondary to the conceptual message as algorithms can always be improved.
>The second part shows that applying this insight to deep RL requires a more careful design of the exploration strategy since empirically not all exploration methods help in more complex environments.
> [..] many components of EDE have been studied in prior works. The main claim we are making is that they can be used to improve generalization significantly.
- You claim that many exploration methods are insufficient for CMDP generalization. So what *are* the properties of exploration methods that help them to cope with complex CMDPs? Which properties of prior "insufficient" methods are preventing generalization in CMDPs? This is maybe the most important question for the paper, as the authors have made it their main contribution. It is the difference between an interesting observation (workshop contribution) and an insight others can build upon (conference contribution).
>We believe that the baselines are representative of general-purpose exploration methods. For example, UBE belongs to the family of Thompson sampling algorithms following Bootstrapped DQN.
- I respectfully disagree. UBE accumulates *future uncertainty*, which is different from making decisions as an optimistic trade-off between estimated return and *local uncertainty*, which is what all evaluated methods rely on. UBE is almost identical to intrinsic reward approaches (giving additional reward for local uncertainty) with a separate uncertainty head (which can also use TD($\lambda$) bootstrapping for faster propagation), which are therefore (not only in totally sparse environments) a very different class of exploration methods that should be compared against. Bootstrapped DQN has a similar "long-term" feature to UBE when each ensemble member gets bootstrapped using its own value prediction (as opposed to the average), but operates very differently (choosing one value function for the entire episode).
>We see that QRDQN variants improve upon DQN variants, highlighting the need for aleatoric uncertainty. In Fig 16c, we show an ablation of QRDQN+UCB with both uncertainties and it does significantly worse which means the separation is important.
- QRDQN is known to improve over DQN in many environments, which is consistent over all presented exploration methods in Figure 5 and e.g. in [25]. So far there is no consensus in the community why that is. If I understand Figure 16c correctly, you compare using epistemic uncertainty with epistemic+aleatoric uncertainty as an exploration bonus. Adding additional uncertainty that is (conceptually) not useful for exploration is obviously worse, and not an argument for the claimed distinction between epistemic and aleatoric uncertainty.
The underlying issue is that, to the detriment of exploration in heteroscedastic environments, epistemic uncertainty estimates *contain* (are "entangled with") aleatoric uncertainty. The variance of an average of $n$ independent random variables $X_i$, with $Var(X_i)=\sigma^2$, is $Var(\frac{1}{n}\sum_{i=1}^n X_i) = \frac{\sigma^2}{n}$. The epistemic (reducible) uncertainty of e.g. averaged returns is therefore proportional to $\frac{1}{n}$, but also contains the aleatoric uncertainty $\sigma^2$. This is a problem if different actions, and thus different futures, have different aleatoric uncertainty. See [4] for an (imperfect) attempt to remove aleatoric uncertainty for exploration, which would also make an interesting baseline. Do the authors maintain their claim that ignoring the distributional variance (as an estimate of aleatoric uncertainty) in QRDQN is sufficient to remove the "entanglement" of aleatoric and epistemic uncertainty (which is indeed a problem in exploration)?
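To make the shrinkage argument concrete, it can be checked numerically (a quick illustrative sketch; the constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, n, trials = 4.0, 10, 200_000

# Each trial averages n i.i.d. draws with Var(X_i) = sigma^2 = 4.
means = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n)).mean(axis=1)

# The variance of the average shrinks like 1/n but still carries sigma^2:
print(means.var(), sigma2 / n)  # both ~0.4
```

The ensemble's "epistemic" disagreement is therefore scaled by, not independent of, the aleatoric $\sigma^2$.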
>Both DQN ensemble and QRDQN ensemble are not trained on separate minibatches
- This sounds like a more likely explanation for EDE's performance. Have you run an experiment where e.g. QRDQN+UCB or QRDQN+TEE is trained on separate minibatches? This distinction is important because you claim that EDE, which is mostly a combination of known techniques with some unusual implementation details, is superior. It is therefore important to localize which change improved performance. The reviewer is missing a compelling theoretical justification that explains why EDE explores better, or for which situations it explores better. Can you provide (and substantiate) such an explanation?
[4] Nikolov et al. (ICLR 2019). URL https://arxiv.org/abs/1812.07544
---
Reply to Comment 1.1.1:
Title: Thank you for the prompt reply! (1/2)
Comment: Thank you for the prompt and thorough reply, and for engaging with us in the discussion! We are glad that we are able to resolve some of your concerns. We will try to respond to your remaining questions. Once again, we would be happy to discuss further.
> Which exploration does QRDQN use in Figure 13?
QRDQN in Figure 13 uses epsilon greedy.
> So what are the properties of exploration methods that help them to cope with complex CMDPs? Which properties of prior "insufficient" methods are preventing generalization in CMDPs?
This is a great question. As we pointed out in the paper, we hypothesized that the insufficiency comes from inadequate uncertainty estimation, although we acknowledge that the definition of inadequacy can be nuanced for RL.
To accomplish this better uncertainty estimation, we first try to separate the two uncertainties. The reasoning is that a major source of aleatoric uncertainty in the generalization setting is the unobserved context, which can lead to high variance. We believe this is a rich source of aleatoric uncertainty, so modeling it separately is important (perhaps more so than in Atari, where the main source is sticky actions). Of course, as you have noted, distributional RL has other benefits that we do not fully understand, and this separation is not perfect (and hard to achieve), but in practice it seems to be reasonably effective on the benchmark.
The second component is using deep ensembles instead of the existing approximate posterior sampling techniques. It is well-known in supervised learning that deep ensembles are much better at uncertainty estimation (e.g., calibration) than traditional techniques because they can better explore different parts of the function space [1]. We hypothesized that this should be beneficial for deep RL too. Note that why deep ensemble works so much better is also largely an empirical observation even though this observation is extremely robust.
> UBE accumulates future uncertainty
As we acknowledged in the general response, it is infeasible for us to compare EDE to all possible baselines in the literature (especially when they have never been applied to Procgen due to the dominance of policy-based methods) and even the same principle can be realized very differently. We are glad that you agree UBE shares important similarities with Bootstrapped DQN (i.e., long-term/deep exploration). We want to note that it's not clear that UBE outperforms bootstrapped DQN even in Atari since the authors explicitly omitted this comparison in their paper. In addition, as far as we are aware, UBE is also not a commonly-used baseline in the exploration literature, whereas bootstrapped DQN is. Bootstrapped DQN is cited 1227 times whereas UBE is cited 181 times and, in our search, we could not find a public implementation of UBE on Atari or a well-known method that uses UBE as their baseline.
> heteroscedasticity
If we understand you correctly, you are saying that the estimation of epistemic uncertainty is not accurate due to the sequential nature of MDP, since you agree that using aleatoric uncertainty cannot possibly be helpful for exploration. **At the top of page 6 (starting from line 193), we have already acknowledged that the estimate we are computing is biased due to the nature of RL, but empirically biased estimation can still help in practice if it is informative for exploration. So we have not claimed that ignoring the distributional variance fully resolves the entanglement but only that it alleviates the problem.** We are not the first to make this claim as previous works have observed that this helps for MinAtar [2]. The heteroscedastic noise of RL is indeed a very important problem in RL but it is not the problem that we are trying to solve in this work. We will highlight this more in the revision and add citations accordingly.
Regarding the effectiveness of the separation (this also relates to your next question), we would like to highlight the change in performance going from QRDQN to QRDQN+UCB and the change in performance going from DQN to DQN+UCB. As you can see, adding UCB to QRDQN improves the median performance (whereas the other two metrics stay relatively the same), but adding UCB to DQN significantly **hurts** the performance. Since they both use the same ensemble technique, we believe that this difference can be attributed to the type of exploration bonus used. In DQN+UCB, we use the variance of different DQN heads, which contains both types of uncertainties. In QRDQN+UCB, we use (biased) epistemic uncertainty. While QRDQN generally outperforms DQN, that cannot explain why UCB would lead to worse performance for DQN on Procgen. One sensible hypothesis is that modeling aleatoric uncertainty is the cause of this difference. | Summary: This paper proposes an exploration method for value-based RL for contextual MDPs (CMDP), motivated by the idea that good generalization in RL requires attention to RL-specific problems such as exploration. The method uses an ensemble of quantile Q-functions to estimate the epistemic uncertainty about the value function. The uncertainty estimate is then used to conduct upper confidence bound (UCB) exploration based on the estimated distribution of the Q-values. Additionally, the algorithm samples more diverse data by using different exploration weights for the different actors. The proposed method achieves better performance than previous value-based methods on procgen, where it is also competitive with policy optimization-based methods. On crafter, the proposed method achieves performance on par with the state-of-the-art method with a much smaller network.
## Acknowledgment
I have read the rebuttal and the following discussion and updated my review accordingly. Concerns raised by other reviewers convinced me to lower my rating by one grade.
Strengths: ## Originality
This paper presents a well executed study on generalization for value-based RL algorithms in CMDPs. Other studies of generalization in CMDPs have been presented in previous work, but the focus on value-based RL and exploration seems like a relatively unexplored area. The proposed method is a novel combination of existing ideas from previous work. It is well motivated by the presented analysis and achieves state-of-the-art results for value-based RL in challenging CMDPs, so presenting this combination seems like a potentially important contribution to the RL community.
## Quality
The study of exploration as a driver for generalization is of high quality. It starts from illustrative examples and applies that intuition to more complicated cases. The paper hits a good balance in what level of detail to cover the background information. The proposed method is motivated by the analysis and each of the design decisions are carefully analysed in the empirical section. The empirical results are exceptionally thorough and well presented.
## Clarity
The paper is well written throughout. It explains its ideas at a helpful level of detail. The illustrations and figures are clear and well designed. The results for procgen are presented using a style that has been promoted as the best practice in communicating RL results.
## Significance
Value-based RL is an interesting topic and for various tasks, value-based RL algorithms are found to be state-of-the-art. Therefore, contributions that improve the generalization ability of value-based RL algorithms for CMDPs can be significant contributions to the RL community.
Weaknesses: ## Discussion on exploration-exploitation tradeoff
- The paper proposes exploration as a critical component for achieving generalization for CMDPs. Choosing a more exploratory policy always introduces an exploration-exploitation tradeoff, yet such tradeoff is not discussed in the paper.
- The illustrative examples assume a distribution shift between training and test time MDPs. If there was no distribution shift, using a more exploratory policy would result in an exploration-exploitation trade-off that needs to be balanced. It would be good to comment on how the presented intuition is still relevant when there is no distribution shift.
## Other weaknesses
- The results for the Crafter are presented with less detail than the Procgen ones. I would have appreciated learning curves and the median/IQM/... plots for Crafter as well.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In CMDP settings, where the training MDP distribution is the same as the testing one, would exploration still be the most important driver of generalization performance?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper discusses relevant limitations to the proposed algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the generous review and strong support of our paper! We were excited to hear you found our paper to be an "important contribution to the RL community" and the empirical results to be "exceptionally thorough and well presented".
>exploration-exploitation tradeoff
>
Indeed this is a fundamental problem in RL. In our setting, we don’t think the tradeoff is significantly different from other RL settings. Given a fixed time budget, one would need to choose the extent to which you want to explore before settling on a reasonable solution. Both over-exploration and over-exploitation would result in poor generalization in addition to poor training performance. This is why we chose optimism-based exploration which tries to balance between the two automatically. In EDE, this is controlled by the hyperparameter $\varphi$ (w/o TEE) or $\alpha$ and $\lambda$. That being said, in practice, we observed that EDE is not particularly sensitive to these hyperparameters. You can find this ablation in Figures 17 and 18 in the appendix. We will also add more discussion about this to the paper.
> IQM for crafter
>
The baseline results are taken from the original Crafter paper and other papers that use Crafter, which do not report IQM, so we wouldn't be able to compare to them.
> Distribution of MDPs (Q1)
>
Suppose the full distribution contains the two starting states with equal probability and the training MDP is a single sample from the environment starting from the top-left corner. The population performance would then be just the average of the two curves (blue and orange), and UCB would still perform much better because the training performance (blue curve) is similar for both methods. In this case, there is no distribution shift and the conclusion stays the same.
**Conclusion**
We thank you again for your feedback and strong support of the paper. We hope our answers have addressed all your questions but let us know if you have any outstanding concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
I have read the other reviews and the discussion between the authors and the reviewer dHNU. dHNU has successfully drawn my attention to some weaknesses of the paper, which I had overlooked in my review. As dHNU suggests, I may have been swayed by the excellent presentation to ignore some aspects of the scientific content. I still think the paper is worth acceptance to NeurIPS, and am willing to argue for it. At the same time I would not be massively disappointed if it got rejected. I am currently leaning toward lowering my score slightly, but am waiting to see how the discussion continues to unfold.
Please see my responses to dHNU on the thread started by the AC.
---
Reply to Comment 1.1.1:
Comment: Thank you for getting back to us.
Since the thread you are referring to is not visible to us and the discussion period is not very long, we were wondering if you could let us know your specific concerns so we can address them. Specifically, if you find any of our responses to Reviewer dHNU unconvincing, we would be curious to hear why and happy to discuss them further.
Many of the brought-up weaknesses are already discussed in the paper, but if there are places where we could do better, we would be happy to revise it.
Finally, the paper's goal is to use exploration to improve generalization in RL (particularly for value-based methods, where we do improve upon the baseline significantly), not to solve risk-sensitive RL or heteroscedastic noise in RL. We feel that it is unreasonable to criticize a work for problems that the work did not claim to address in the first place. | Summary: This paper introduces a method called EDE (Ensemble Distributional Exploration) that promotes the exploration of states with high epistemic uncertainty through an ensemble of Q-value distributions. The authors evaluate EDE and compare it to several baselines on Procgen and Crafter.
Strengths: **Originality**
The paper's originality lies in its approach to exploration in reinforcement learning. The authors introduce the concept of using ensembles for uncertainty estimation to guide exploration.
**Quality**
See Weaknesses Section.
**Clarity**
The paper is well-written and organized, making it easy for readers to follow the authors' thought process and understand the methodology and results.
**Significance**
By demonstrating that ensembles for uncertainty estimation can effectively guide exploration, the authors have opened up new possibilities for exploration in reinforcement learning. The paper's focus on improving an existing method for better exploration is particularly relevant in today's context, where there is a growing emphasis on improving the efficiency and effectiveness of reinforcement learning.
Weaknesses: One potential weakness is the increased computational cost associated with the method. The use of an ensemble to guide exploration, while useful, also makes the method more expensive to run. This could limit its applicability in scenarios where computational resources are constrained.
In Figure 5, it would be beneficial to add QR-DQN with Thompson Sampling in the ablation.
Furthermore, in Table 1 it would be more informative to evaluate QR-DQN, and EDE (with UCB and TEE instead of Thompson Sampling).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Could you provide some insight into the decision to use Thompson Sampling in the Crafter experiment? Specifically, why was this approach chosen over UCB combined with TEE? Understanding the rationale behind this choice would be beneficial.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for your valuable feedback! We were glad to hear you think our paper "opens up new possibilities" and is "particularly relevant in today's context".
> computational cost
>
We agree that this is an important topic. Estimating uncertainty in an efficient manner is an active area of research in machine learning. In supervised learning, methods such as epinet [1] could significantly reduce the computational cost while providing good uncertainty estimates, so we believe it could also be useful in RL. For this work, we chose to use deep ensembles because it’s a commonly used technique that is robust and easy to implement.
Furthermore, the cost of ensembles occurs mostly at training time; at test time, the additional cost is much smaller due to the shared feature extractor. Since ensembles can achieve much better results, this cost may be justified depending on the application.
> Thompson sampling vs UCB
>
As mentioned in the paper, for Crafter, the environment is not naturally parallelized so we cannot easily use TEE. In this setting without TEE, we observed that TS outperforms UCB. Since our computational budget was limited, we chose to focus on the method that does better in each setting. In the provided source code, both settings can be used for those interested. For procgen, in our preliminary experiment, QR-DQN+TS+TEE did not perform as well as QR-DQN+UCB+TEE. Since evaluating the entire Procgen suite is expensive (~250 GPU days for all the experiments), we did not conduct the full experiment for TS and chose to conduct other ablation and baselines. For bandits, UCB and TS are actually very similar algorithms (in spirit at least), and they have similar regret but their actual performance can be problem-dependent [2] so it's hard to say one will always outperform the other. We will make this more clear in the revision.
**References**
[1] Epistemic Neural Networks. Osband et al.
[2] An Empirical Evaluation of Thompson Sampling. Chapelle et al.
**Conclusion**
We hope our answers above have addressed your questions sufficiently well to alleviate your concerns regarding the paper. Please let us know if there is anything standing between us and a strong recommendation of acceptance.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thoughtful rebuttal, which has addressed most of my concerns. I will continue to maintain my current score at this stage. | Summary: The paper proposes that effective exploration is important for generalization and shares a value-based method which gets good generalization performance on procgen and outperforms Rainbow on crafter.
Strengths: originality: The idea of leveraging improved exploration to improve generalization is novel and seldom explicitly tackled.
quality: The evaluation is very high-quality.
clarity: The paper is fairly clear. I like starting with a tabular MDP to motivate that exploration improves generalization. The methods section is dense but clear.
Weaknesses: Figure 1 could be improved to show some abstraction for how the type of exploration they propose can improve generalization.
The method is only evaluated on two domains. It could be improved if evaluated on more domains.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: You say this is the first value-based method to get good performance on Procgen. The Q-learning baseline from [1] simply adds learning a value-equivalent model as an auxiliary task (but does not use it for planning) and gets pretty good performance. Can you discuss the difference and how your method does better than theirs? Is it still true that yours is the first value-based method to get this type of performance on Procgen? MuZero also does quite well and is value-based.
The authors compare to LSTM-SPCNN and note that LSTM-SPCNN is a "specialized architecture", but LSTM-SPCNN is just a vision-transformer-like architecture with PPO, so it seems fairly general. Why is the comparison there?
Of the *many* exploration algorithms that are available to compare against, why did you choose these particular ones?
While I like the idea of improving generalization by improving exploration, do you provide evidence that bad exploration is indeed the reason for poor generalization?
It's hard to judge this paper without answers to the above questions.
Why did you choose crafter over other domains? I could imagine using the mini-grid environments which seem like more relevant environments since you can explicitly setup experiments that test both exploration and generalization.
[1]: Procedural Generalization by Planning with Self-Supervised World Models
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: There is a single line on limitations (expensive due to ensembling) which is insufficient in my opinion.
Will this method improve exploration & generalization in 3D environments like Minecraft? How about mazes like mini-grid? Why or why not?
Are there different conditions under which you expect the epistemic uncertainty estimate to be more-or-less informative?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for taking the time to review our paper and the detailed questions! We were glad to hear you found the idea novel and the evaluation of very high quality. We hope our answers below will address your remaining concerns.
> Figure 1 could be improved to show some abstraction
Figure 1 is meant as an accessible illustration of one possible intuition/interpretation (which other reviewers have found effective), rather than a full description of our method. We opted for simplicity as we were concerned that illustrating uncertainties would result in an overly-complicated diagram which may confuse readers.
> more domains
>
Respectfully, we believe this is quite an extensive evaluation of the algorithm. The Procgen benchmark already contains **16 distinct games**. Since this is the standard benchmark for studying generalization in RL, we believe this choice is appropriate. The experiments and ablations for these games alone take more than 250 GPU days, which is quite extensive in our opinion. On top of that, Crafter, which was designed to resemble Minecraft, adds another domain that poses very different challenges from Procgen. We believe our experiments on both Procgen and Crafter offer sufficient evidence to support our main claims and demonstrate the generality of our approach.
> Q1: QL agent from [1]
Thank you for going through the reference and pointing out this one. The QL agent from [1] performs on par with PLR which is worse than IDAAC [2]. In our experiments, our method outperforms IDAAC which suggests that we should outperform [1] as well. Further, we do not claim that exploration is the only way to improve generalization. The main difference is that the QL agent used in [1] uses additional self-supervised learning objectives. This is orthogonal to our contribution, so combining these methods may lead to further improvements. Also note that [1] uses a different setup than we do. Our work adopts the standard setup with 25M training steps on 200 levels from the ``Easy`` mode which allows for direct comparisons to existing methods, whereas [1] uses 30M training steps on 500 levels from the ``Hard`` mode which is not commonly used in the literature.
> Q2: LSTM-SPCNN
LSTM-SPCNN is not a vision transformer (the transformer-based architecture does worse than LSTM-SPCNN). It is a CNN that does not reduce the spatial dimension via striding (hence the ~100x more parameters than QR-DQN's architecture), because striding would lose information about the backpack at the bottom of the frame in Crafter, which is important for doing well. In this sense, LSTM-SPCNN is specialized for Crafter.
> Q3: the choice of baselines
Please see our general response at the top.
> Q4 evidence that bad exploration is indeed the reason for poor generalization
In Figure 16c of the appendix, we showed that if one targets both aleatoric and epistemic uncertainty in EDE, the performance is significantly worse, suggesting that the type of exploration indeed affects performance. Nonetheless, there can be many reasons for poor generalization so even a good exploration policy will not always result in better performance if these other components are not appropriate (e.g., poor optimization).
> Q5 Crafter
Crafter is designed to be an extremely challenging environment that requires, among other things, strong generalization and exploration, so we believe it is very relevant for our study. On the other hand, there is no standard benchmark for generalization in mini-grid that everyone uses. In comparison, Crafter is more challenging, has a standardized benchmark that allows for easy comparison, and shows how far we can get just by doing better exploration.
> Will this method improve exploration & generalization in 3D environments like Mine Craft? How about mazes like mini-grid? Why or why not?
We believe that it would help in environments like Minecraft since the structure of Crafter is designed to resemble Minecraft (e.g., crafting tree, resource collection, and survival). The 3D part of Minecraft is more challenging for reasons that are less related to exploration (e.g., representation learning). For mazes, it is less clear whether exploration during training is the main bottleneck (it could still be useful during test time). In Procgen, there is a maze game, and in this setting we observed that EDE did not improve QR-DQN significantly. We hypothesize that this is because mazes require good long-term planning, which demands better representations rather than better exploration. We commented on this in the caption of Figure 13, but we would be happy to add more discussion.
> Are there different conditions under which you expect the epistemic uncertainty estimate to be more or less informative?
CMDPs are a large and underexplored class of problems so we do not have an exact theoretical characterization right now about when we can expect epistemic uncertainty to help, but it should not hurt in general (it improves over QR-DQN in almost every game). We would be happy to provide some speculations if you think that's helpful.
For different initial states, good exploration should generally help. We argue this analytically in Appendix A. For different dynamics, the story is more complicated. “The change in transition is not highly correlated with the action” could be a good condition, but we do not have a precise theoretical characterization. Finally, for different rewards, very strict assumptions are needed to ensure that generalization is possible. Please see [1] for more analysis.
**References**
[2] Decoupling Value and Policy for Generalization in Reinforcement Learning. Raileanu et al.
**Conclusion**
We hope the above clarifications and proposed revision have adequately addressed all your questions regarding the paper. If there are no outstanding concerns, will you consider raising your score to provide strong support for our paper?
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thank you for your response.
> Figure 1 is meant as an accessible illustration of one possible intuition/interpretation (which other reviewers have found effective), rather than a full description of our method. We opted for simplicity as we were concerned that illustrating uncertainties would result in an overly-complicated diagram which may confuse readers.
Given other reviewers (e.g. dHNU), I still think Figure 1 is sub-optimal for your goal of motivating exploration for Deep RL.
> main difference is that the QL agent used in [1] uses additional self-supervised learning objectives. This is orthogonal to our contribution, so combining these methods may lead to further improvements.
I agree. I still think the claim "this is the first model-free value-based method to achieve state-of-the-art performance on these benchmarks" is misleading since they were also a model-free value-based method. I would remove this from the text. Good point that they study the `hard` setting with 300M training steps instead of 25M training steps.
> LSTM-SPCNN is specialized for crafter.
I disagree that LSTM-SPCNN is specialized to Crafter since it can be applied to any other vision-based RL domain, though I agree that it was designed to target Crafter by using patches with attention. Does some part of its design use Crafter-specific details?
While I appreciate the complexity and variety of crafter + procgen for your experiments, I think it does make the connection to section 3 even less clear. I think some visualization across any of the Deep RL domains you study which showcased the benefit of exploration for generalization would dramatically improve this paper.
I think reviewer dHNU did a good job of pointing out this paper's limitations. I keep my current score.
---
Reply to Comment 1.1.1:
Title: Clarifying a misunderstanding
Comment: Thank you for getting back to us. We would first like to clarify a misunderstanding, as it seems to be a central point of discussion among the reviewers. The goal of Figure 1 and Section 3 was not to motivate EDE but to motivate why exploration might be helpful for generalization in RL. EDE's algorithmic designs require additional motivation, which we explain in the paper. We never intended to suggest that we derived EDE from Section 3 and Figure 1; indeed, it would be hard if not impossible to visualize what happens in a neural network within a tabular environment or an illustrative figure. If there are places in the text that may suggest this is the case, please let us know so we can change them.
The flow of logic is that “*Figure 1 and Section 3 demonstrate that there exist situations in RL where better exploration can help generalization because this is not well-known before -> Section 4 shows that we can design an exploration method that drastically improves generalization in deep RL*”, **not** “*The method in Section 4 works because Figure 1 and Section 3 explains everything that happens in deep RL*”. To the best of our knowledge, exploration was not a well-known avenue for improving generalization in RL, so we wanted to have a didactic example that illustrates this point as straightforwardly as possible.
If you have suggestions about how we could improve figure 1 without increasing the cognitive load of the reader, we would be happy to incorporate the suggestions.
> I still think the claim "this is the first model-free value-based method to achieve state-of-the-art performance on these benchmarks" is misleading since they were also a model-free value-based meth. I would remove this from the text.
We will change the text to “competitive” or “strong” to be more appropriate. However, we want to emphasize that the QL agent in [1] was not competitive even at the time when [1] first came out. In their setting, the QL agent has the same performance as PLR that is worse than IDAAC which came out before [1]. If the behavior transfers between the two settings, then QL agent should have worse performance than EDE as EDE performs better than IDAAC.
> I disagree that LSTM-SPCNN is specialized to crafter since it can be applied to any other vision-based RL domain. Though I agree that it was designed to target crafter by using patches with attention.
By "specialized" we mean exactly that it is designed to target Crafter. We will revise it to "designed to target the properties of Crafter" to avoid ambiguity. To be more specific, "SPCNN does not have pooling layers, so the resulting output tensor is of the same height and width as the input image. The flattened tensor that is fed into a linear layer is much larger (64x64x64 instead of 8x8x64 for CNN)." This is quite an unusual architecture design for most applications because it makes the linear layer at least 64 times larger.
> I think some visualization across any of the Deep RL domains you study which showcased the benefit of exploration for generalization would dramatically improve this paper.
In figure 3 of the main text, we have a visualization that shows the benefit of exploration in the game `bigfish` from Procgen. The environment does not support resetting to an arbitrary state due to procedural generation so we are unable to generate the exact frame. | Rebuttal 1:
Rebuttal: We thank all the reviewers for taking the time to provide valuable feedback on our work. Overall, the reviewers found our paper to be clear and easy to follow, the idea of using exploration to improve generalization to be novel and well-motivated, and the experimental results to be thorough and strong. Below we clarify the main contribution of the paper and motivate our choice of baselines since multiple reviewers had questions about these.
> Core contribution of the paper
We'd like to emphasize that **the main message of this paper is the insight that appropriate exploration can help generalization to new environments in RL**. This message is quite intuitive on the tabular MDP we constructed, but it is only relevant if it helps empirically on more complex problems such as Procgen or Crafter, which we use to validate this hypothesis. Our experiments also show that many exploration methods which were designed to improve performance on a single training environment (e.g., NoisyNet, ez-greedy, bootstrapped DQN etc.) are insufficient when applied in such settings. This motivated the design of EDE, a new exploration method that significantly improves performance in new environments. While it is true that EDE has strong empirical performance (especially for a value-based method), we think the algorithmic contribution is secondary to the conceptual message as algorithms can always be improved. Since most existing methods for Procgen do not target exploration explicitly, we think they would benefit from better exploration. We hope our work is a first step towards designing exploration methods that improve generalization in RL and that it will inspire more research on better understanding the relationship between exploration and generalization.
> Choice of baselines (reviewers PttU and dHNU)
Exploration is a longstanding problem in RL and it is infeasible for us to evaluate all possible methods in the literature. We believe that the baselines and ablations we use in the paper are extensive and representative of general-purpose exploration methods in the literature. The estimated time for all Procgen experiments (without considering hyperparameter tuning) is more than **250 GPU days** and covers **5 popular baselines + 8 ablations**. We believe this ensures that the conclusions from our experiments are reliable.
Most of the baselines we chose are general-purpose exploration methods that aim to accelerate learning rather than solve an environment with extremely sparse reward (which is not the setting we consider here and would require additional techniques). They are widely cited and distinct from each other (e.g., bootstrapped DQN is cited 1222 times and NoisyNet is cited 568 times). They follow the principle of "optimism in the face of uncertainty", trading off return maximization with uncertainty. We will refer to them as optimism-based exploration although the exact implementation can vary. Since EDE follows the same principle to explore, we believe these are the most relevant and informative baselines to compare against.
Several reviewers mentioned intrinsic reward methods which we assume refer to methods such as ICM [2] or RND [3]. As we explained in footnote 4, [1] finds that *existing intrinsic reward methods are no better and much less sample-efficient than optimism-based exploration methods such as NoisyNet on Atari*, except for the environments with extremely sparse reward. Note that Procgen and Crafter don't fall into this category since they generally have dense rewards and not-so-long episodes. Thus, we think it is unlikely that intrinsic reward methods would outperform our baselines (e.g., NoisyNet), so we did not include them due to the limited computation budget. However, we believe that a better understanding of how intrinsic rewards influence generalization in RL is an important and interesting research question that would be better addressed in future work.
**References**
[1] On bonus based exploration methods in the arcade learning environment. Taiga et al. 2021.
[2] Curiosity-driven Exploration by Self-supervised Prediction. Pathak et al. 2017.
[3] Exploration by Random Network Distillation. Burda et al. 2018.
**Conclusion**
We thank you again for your feedback and support of the paper. We hope our clarifications have adequately addressed all your concerns, but let us know if you have any remaining questions. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper focuses on the importance of exploration for generalization in contextual MDPs. The proposed method is built on QR-DQN, where the epistemic uncertainty can be separated from the aleatoric uncertainty via an ensemble. The epistemic uncertainty is then used in a UCB manner to promote exploration. The resulting algorithm is tested on two benchmarks and shows significant improvement over previous baselines.
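The exploration rule summarized above (an ensemble of QR-DQN-style heads whose disagreement estimates epistemic uncertainty, used as a UCB-style bonus) can be illustrated with a minimal sketch. All names here, and the specific disagreement measure (std of per-member mean Q-values), are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ucb_action(quantiles, phi=0.5):
    """Pick an action from ensemble quantile estimates.

    quantiles: array of shape (n_ensemble, n_actions, n_quantiles).
    Illustrative assumption: epistemic uncertainty is the standard
    deviation, across ensemble members, of each member's mean Q-value;
    the aleatoric spread within a single member's quantiles is ignored.
    """
    q_mean_per_member = quantiles.mean(axis=2)   # (n_ensemble, n_actions)
    q_mean = q_mean_per_member.mean(axis=0)      # average over the ensemble
    epistemic = q_mean_per_member.std(axis=0)    # disagreement only
    return int(np.argmax(q_mean + phi * epistemic))
```

With `phi = 0` this degenerates to greedy action selection over the ensemble mean; larger `phi` increasingly favors actions the ensemble disagrees about.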
Strengths: - The paper is overall well-organized and easy to follow.
- The proposed method is novel, well-motivated and empirically strong.
Weaknesses: - The motivating environment introduced in Sec 3 doesn't seem exactly appropriate for the following reasons:
- According to the definition of CMDP, the distribution of MDPs during training and testing should be the same, which is not the case here.
- Since only the starting distribution is different, the value function is the same in both MDPs. Combined with above mentioned different distribution during training and testing, this example seems too obviously engineered towards promoting visiting more states during training.
- Claiming UCB is more effective in exploration compared to $\epsilon$-greedy seems dry. It would be helpful to at least compare the training stage state-action visiting frequency for both methods.
- Same as the motivating example, it would be helpful to show either qualitatively or quantitatively the difference in the extent of exploration among different methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors emphasize separating epistemic uncertainty from the overall uncertainty, which motivates them to build their framework on top of QR-DQN, from my understanding. This decision makes sense theoretically. However, there is a lack of empirical study to show the actual significance. If this separation is unimportant, this method can potentially be applied to a wider range of algorithms.
Minor typos / mistakes:
- No legend in Figure 2(b) and 2(c).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and support for the paper! We were glad to hear that you found our paper "well-organized and easy to follow" and our proposed method "novel, well-motivated, and empirically strong". We hope our answers below will address your remaining questions.
>The motivating environment introduced in Sec 3
>
We agree that the claim that exploration can help generalization to new environments (with different initial states or dynamics) may seem obvious *after* we have put them into the context. In fact, we think that it's good that it's so intuitive since the toy example is meant to provide intuition. But of course, the motivating example is not meant to and cannot capture everything that happens in deep RL.
As far as we know, the observation that exploration affects generalization in RL, has not been explicitly discussed in any prior work, so we believe it is still a valuable contribution to explicitly emphasize this, provide intuition using simplified settings and controlled experiments, and validate this hypothesis in more challenging environments like Procgen and Crafter, which our paper does.
> The distribution of MDPs during training and testing should be the same
>
Regarding the distribution of MDPs, the distributions can be easily made the same: the full distribution contains the two starting states with equal probability and the training set contains a single sample from the distribution which is the environment starting from the top left corner. In this case, the population performance would be just the average of the two curves (blue and orange) and UCB would still perform much better since the performance on training MDP (blue) is about the same for both methods. We'd be happy to add a discussion on this point in the appendix.
>Importance of separation of different uncertainties
>
Our paper contains multiple experiments that support the importance of separating the two types of uncertainties. First, in Figure 16c from the appendix, we directly compare aggregated (both aleatoric and epistemic) with only epistemic uncertainties showing that using only the epistemic uncertainty to guide exploration is significantly better. Second, Figure 5 from the main paper shows that the ensemble of DQN (which contains both uncertainties) performs significantly worse.
> Extent of exploration
>
It is difficult to visualize exploration in a procedurally generated environment because there could be a potentially infinite number of distinct states. Moreover, we probably do not want the policy to visit all of these states since that would be extremely inefficient. | null | null | null | null | null | null |
CSOT: Curriculum and Structure-Aware Optimal Transport for Learning with Noisy Labels | Accept (poster) | Summary: The paper studies the problem of noisy label learning. The paper adopts an optimal transport approach to generate pseudo labels for noisy samples. Particularly, the paper builds on the existing method and adds additional regularization terms to enforce the consistency between sample classes and learned representations. The paper also extends the sinkhorn algorithm to solve the proposed OT objective efficiently. Empirically, the proposed method has improved performance over baselines on widely used datasets with various noisy ratios.
Strengths: 1. The paper extends the existing optimal transport approach to include the consistency between the sample representations and predictions/labels. It is a novel objective and a solid idea intuitively.
2. The paper extends the sinkhorn algorithm to solve the proposed new OT objective efficiently.
3. The proposed method has strong empirical performance, especially for high noise ratios.
4. As a pseudo-labeling step, the proposed method can potentially work with other noisy label learning objectives.
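For context on the solver this strengths list refers to: standard entropic-regularized OT is solved with Sinkhorn iterations, which the paper extends for its new objective. Below is a minimal sketch of the vanilla (unextended) Sinkhorn algorithm for reference only; the paper's CSOT solver additionally handles curriculum constraints and structure-aware regularization, which this sketch does not implement.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, n_iters=200):
    """Vanilla Sinkhorn for entropic OT:
        min <C, Q> + eps * <Q, log Q>
        s.t. Q @ 1 = a,  Q.T @ 1 = b.
    C: cost matrix (n, m); a, b: marginal vectors summing to 1.
    """
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                 # enforce column marginals
        u = a / (K @ v)                   # enforce row marginals
    return u[:, None] * K * v[None, :]    # transport plan Q
```

The alternating scalings converge to a plan whose row and column sums match the prescribed marginals, which is exactly the structure the paper's extended solver must preserve while adding its extra terms.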
Weaknesses: 1. The proposed method is a regularization of the existing optimal transport pseudo-labeling method. The novelty is thus limited.
2. The introduced regularization uses the same weight kappa for the two terms, while the two terms could have quite different behaviors/values.
3. It is not clear to me how does the method perform or should be modified in the case where the class distribution is imbalanced.
4. The OT objective is not directly related to the training objective but serves as a sieving step. It would be great if a more comprehensive training objective can be formalized to include the OT based selection.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Please address the weaknesses 2 3 and 4.
2. What's an explanation for the experimental results that the given method does better in top-1 ACC but worse in top-5 for the ImageNet and WebVision datasets?
3. The global relationship modeled by OT depends on the batch size. How does the batch size affect the empirical performance? How does the number of total classes in the training set affect the choice of the ideal batch size for the OT step?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and constructive suggestions on our paper. We address your detailed comments below:
> **Q1**. the proposed method is a regularization of the existing OT pseudo-labeling (PL) method. The novelty is thus limited.
**A1**. Firstly, we would like to clarify that our CSOT formulation involves not only two local coherent regularization terms but also curriculum constraints. Notably, **CSOT introduces a novel OT formulation, necessitating a fresh solver. Thus, we devise a lightspeed computational method**.
Secondly, we emphasize that directly applying the original OT to certain PL tasks may result in sub-optimal performance. Consequently, it becomes necessary for researchers to investigate more adaptive OT formulations for specific problems [9][54], including off-the-shelf variants like unbalanced OT and partial OT, to address the specific challenges of the problem. In this context, we propose a novel CSOT formulation tailored for the denoising and relabeling task.
Thirdly, we highlight that our work serves as a valuable example of developing a customized OT formulation and a corresponding solver. By showcasing our method's adaptability and efficacy, we aim to contribute to the wider application and exploration of OT across various domains.
> **Q2**. The introduced regularization uses the same weight kappa for the two terms, while the two terms could have quite different behaviors/values.
**A2**. We agree with the reviewer that there may exist better $\kappa_1$ and $\kappa_2$ for $\Omega^P$ and $\Omega^L$ accordingly in Eq.(3). However, practically, it is rather cumbersome to tune separate hyperparameters for different datasets or noise rates, and we set a unified $\kappa$ for simplicity.
Moreover, we find that performance is not very sensitive to $\kappa_1$ and $\kappa_2$, as shown in Table G4 (global response PDF). The table also reveals that in scenarios with high noise rates, such as CIFAR-100 sym-0.8, prioritizing the prediction-level term with a higher weight is more advantageous due to label unreliability.
> **Q3**. How does the method perform or should be modified in the case where the class distribution is imbalanced?
**A3**. Thank you for raising this concern. Following existing work [18][54], such as SwAV, we adopt a uniform class-marginal vector for simplicity. While we do not address class imbalance in this paper within the scope of LNL, we believe our approach can be extended for this purpose in future work. Here is a possible solution: we can introduce an outer minimizer to optimize an appropriate imbalanced class distribution $\beta$:

$\min_{\beta} \min_{Q\in\Pi^c(\alpha,\beta)} \left<C, Q\right> + \kappa\Omega(Q) + \varepsilon\left<Q, \log Q\right>$, s.t. $\beta \in \{\beta\in\mathbb{R}_{+}^{C} \mid \sum_{i=1}^{C} \beta_i = m\}$.
> **Q4**. The OT objective is not directly related to the training objective but serves as a sieving step. It would be great if a more comprehensive training objective can be formalized to include the OT-based selection.
**A4**. Thank you for your constructive suggestion.
Firstly, we claim that an additional OT-based training objective isn't essential. To explain this, our CSOT-based Pseudo-Labeling (PL) aims at selecting top-confident samples and relabeling them with reliable labels. Following existing work [18][54], reliable labels from OT-based PL seamlessly plug into diverse supervised or semi-supervised losses. Thus, CSOT-based denoising and relabeling, followed by training with standard objectives, offers flexibility.
Secondly, our sieving optimization is performed offline and is unsuitable as a direct model-training objective. To be more specific, following DivideMix and NCE, our learning scheme is based on semi-supervised learning, as shown in Algorithm S3 (Appendix A.3), and the CSOT-based sieving step has to be offline to produce the labeled and unlabeled datasets for the follow-up semi-supervised learning step. Therefore, the model can learn from the selected and relabeled samples with off-the-shelf training objectives instead of the offline sieving objective.
> **Q5**. The explanation for the experimental results that the given method does better in top-1 ACC but worse in top-5 for the Imagenet and WebVision datasets.
**A5**. Thank you for raising the concern. There are two potential reasons behind this.
Firstly, we reckon that the class-imbalance scenario in the WebVision dataset cannot be fully addressed by our current work. As mentioned in Section 7, class-imbalanced cases are specifically not considered in our work, and hence we simply adopt a uniform distribution as the category distribution. The uniform distribution helps CSOT allocate equal focus to long-tailed categories, favoring top-1 accuracy. However, CSOT may mislabel some major-class samples into unrelated categories, hampering top-5 accuracy. Nevertheless, we believe CSOT can be extended to class-imbalanced scenarios in future work.
Secondly, the existing work NCE employs a co-training scheme with two models, which helps avoid memorizing the noise. Conversely, we train only a single model, which may lead to some incorrect labels being memorized.
> **Q6**. How does the batch size affect the empirical performance? How does the number of total classes in the training set affect the choice of the ideal batch size for the OT step?
**A6**. Thank you for raising this concern. Firstly, we note that having more samples per class is better for fully capturing the local and global structure of the data. However, a larger batch size enlarges the OT matrix, potentially affecting computational efficiency. To balance accuracy and efficiency, we guarantee each class has at least 20 samples and set a batch size of 2000 for offline denoising and relabeling. Additionally, we employ an accumulated memory bank that stores samples from each mini-batch until it fills 2000, which avoids exceeding the GPU memory limit.
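The accumulation scheme described in this answer (buffer incoming mini-batches until roughly 20 samples per class, i.e., a bank of 2000, then run the offline OT step on the full bank) can be sketched as follows. The class name and the `flush` interface are illustrative assumptions, not the authors' code.

```python
class OTBank:
    """Accumulate samples until the bank is large enough for a stable
    OT step: at least min_per_class samples per class on average."""

    def __init__(self, n_classes, min_per_class=20):
        self.capacity = n_classes * min_per_class  # e.g., 100 * 20 = 2000
        self.items = []

    def add(self, batch):
        self.items.extend(batch)

    def ready(self):
        return len(self.items) >= self.capacity

    def flush(self):
        # Hand back a full bank for the offline OT step; keep the remainder.
        bank, self.items = self.items[:self.capacity], self.items[self.capacity:]
        return bank
```

In a training loop one would call `add` on each mini-batch and run the denoising/relabeling step whenever `ready()` returns true, which keeps the OT matrix at a fixed, memory-bounded size.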
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply. I keep my score unchanged. | Summary: This paper introduces a novel formulation of Optimal Transport (OT), named Curriculum and Structure-Aware Optimal Transport, for generating pseudo labels by considering both inter- and intra-distribution structures of samples. Moreover, to efficiently estimate the distribution's structure, the authors adopt a curriculum paradigm to progressively train the proposed denoising and relabeling allocator. Additionally, they present a computation method for the proposed CSOT that ensures faster processing speeds, reducing computational overhead. Experimentally, this paper achieved SOTA performance on various benchmarks.
Strengths: 1. Estimating the intra- and inter-structure coherence of samples is a convincing and reliable method for improving relabeling accuracy. The proposed prediction-level and label-level consistency constraints also seem interesting and plausible.
2. The combination of the proposed OT and curriculum learning for solving LNL is also natural and makes sense.
Weaknesses: 1. The effectiveness of aligning global and local structures between samples and classes is not fully convincing. The ablation results for OT in Table 3 seem weak. The comparison between row (a) (78.07) and row (b) (78.65) suggests that the performance improvement brought by the proposed prediction-level and label-level constraints is limited. Moreover, the introduction of two additional constraints adds complexity to the optimization. Similarly, row (e), CSOT w/o Ω^{L}, achieves the best performance, indicating that the benefits brought by the prediction-level and label-level constraints are unstable.
2. Section 4.3, the part readers care most about, should show how the loss function reflects the integrity of the method and connects to its own innovations. I cannot see this article's innovation in the loss function: each loss term is existing work. The work in this paper seems to be used only to build the datasets $\mathcal{D}_{\text{clean}}$ and $\mathcal{D}_{\text{corrupted}}$ for training. This structure and way of writing can greatly weaken the contribution of the paper. Besides, I would suggest the authors provide an overall algorithm showing the whole training process and where the proposed methods are used during each training epoch, since the training objectives are not the key point of this paper.
3. Considering the complexity of the proposed algorithm and its marginal improvement over previous methods on the two real datasets in Table 2, especially compared to NCE, the effectiveness of this work is questionable. Additional experiments are suggested, especially on Clothing1M. Besides, some recently published related work [1] should be discussed.
[1] OT-Filter: An Optimal Transport Filter for Learning with Noisy Labels (CVPR 2023)
4. The results of row (g) in Table 3 are not sufficient to show that the performance improvement comes from the method in this paper. For example, a more detailed ablation study is needed to disentangle the role of the NCE loss from the role of CSOT.
5. The structure of the article is confusing, which weakens its contribution. Moreover, some tool concepts are introduced quite abruptly. For example, Eq. (3) and (6) present a new OT formulation whose specific solution is promised later, but this expectation is undercut by the abrupt introduction of entropy regularization in the following text. As a result, the final Eq. (16) does not look significantly different from the original Sinkhorn algorithm. It is recommended to introduce Sinkhorn from the beginning and to emphasize the new constraints built on top of it.
6. The text in the figures in the experimental section is too small, which hampers reading. Also, why are there two figures both labeled Figure 2? Moreover, Figure 4 is cited in the text, but no Figure 4 actually exists.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. For more precision, I would suggest providing a more detailed description of Equation (6). What does the left constraint in Equation (6) signify? Why should $Q\mathbb{1}_{C}$ not exceed $\frac{1}{B}\mathbb{1}_{B}$? Furthermore, it seems that $\frac{m}{C}\mathbb{1}_{C}$ is not a simplex. Could you please explain the rationale behind this relationship and provide additional descriptions or citations to support it? Additionally, the introduction of the curriculum into SOT appears too direct and lacks smoothness.
2. The identification of selected samples in Equation (7) is confusing. What does "topK(W, ⌊mB⌋)" mean? Based on my understanding, it implies that a sample pair (x_i, y_i) is considered clean if it belongs to the top-k most convincing samples in the current training batch. I would suggest adding a description of the floor symbols ⌊ and ⌋ to clarify their purpose.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors describe its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback on our paper. Here we address your detailed comments below:
> **Q1(1)**. The comparison between classical OT (row(a)) and Structure-aware OT (row(b)) in Table 3 suggests that the performance improvement brought by the proposed two regularization terms is limited. Row(e), CSOT w/o Ω^{L}, achieves the best performance, indicating that the benefits brought by two regularization terms are unstable.
**A1(1)**. Firstly, **the modest improvement of Structure-aware OT (SOT) alone results from the inherent characteristics of the Learning with Noisy Labels (LNL) task**, where the feature space and the noisy labels are ambiguous in the early training stage, leading to biased label generation.
**To ensure reliable pseudo-labels for LNL task, we introduce curriculum constraints to SOT, i.e. our CSOT, where structure-aware regularization terms and curriculum constraints mutually enhance each other.**
As shown in Table 3, our CSOT significantly boosts the classical OT (row (a)) performance from 78.07 to 81.85, achieving the best result.
As shown in Figure S6 (Appendix B.5), the CSOT design ensures the high accuracy of clean labels during the early training stage, which helps CSOT build a robust allocator in a high noise rate scenario.
> **Q1(2)**. The introduction of two additional constraints adds complexity to the optimization.
**A1(2)**. Our proposed efficient solver ensures minimal computational burden. Table 5 demonstrates our method's faster total training time compared to DivideMix and NCE.
> **Q2(1)**. The loss function needs to be able to reflect the integrity of the method and the specific combination with its own innovation points.
**A2(1)**. Firstly, we respectfully disagree with this comment. We believe the loss function is not a necessary vehicle for reflecting the integrity and novelty of the method. In this paper, we do design a new objective function for OT, i.e. CSOT, incorporating local coherent regularization terms and curriculum constraints. **Our proposed CSOT and the new solver, rather than the loss function, are the key innovation of this paper**. Notably, our CSOT, which aims at identifying clean labels and correcting corrupted labels, is readily compatible with other Learning with Noisy Labels (LNL) losses.
Secondly, we would like to claim that **modeling sample selection and label correction [37][53] is also a common direction for LNL, as shown in [R1]**, which clearly categorizes recent methods into four directions including "Sample Selection" and "Robust Loss Design".
[R1] [TNNLS 2022] Learning from noisy labels with deep neural networks: A survey
> **Q2(2)**. Suggest the authors provide an overall algorithm to show the whole training process.
**A2(2)**. We provided the overall algorithm in Algorithm S3 (Appendix A.3).
> **Q3(1)**. Additional experiments on Clothing1M are suggested.
**A3(1)**. We conducted the experiment on the Clothing1M dataset in Table S7 (Appendix B.5).
> **Q3(2)**. The related work [CVPR 2023] OT-Filter should be discussed.
**A3(2)**. Thanks for your advice. We discuss this work in the following paragraph, and they will be involved in our final version.
Firstly, we clarify that OT-Filter is a concurrent study, made publicly available after the NeurIPS 2023 submission deadline.
Secondly, our proposed CSOT methodology differs significantly from OT-Filter. OT-Filter involves standard OT-based pseudo-labeling with extra sparsity regularization, whereas CSOT concurrently considers both inter- and intra-distribution structures of samples to construct a robust curriculum allocator for denoising and relabeling.
Thirdly, CSOT's superiority over OT-Filter on CIFAR-10/100 and the Clothing1M dataset is demonstrated in Tables G1, G2, and G3 (global response PDF).
> **Q4**. Need a more detailed ablation study to explain the role of NCE loss and the role of CSOT.
**A4**. Firstly, **we did conduct a detailed ablation study in Figure 2 (page 9)** comparing NCE and CSOT on clean-label identification and corrupted-label correction. We will enlarge the font size in Figure 2 to make it clearer. **Despite using a similar learning objective to NCE, CSOT excels at sieving and relabeling, confirming its effectiveness.**
Secondly, our CSOT serves as a sample selection and pseudo-labeling step, akin to NCE. Thus, we adopt NCE's existing semi-supervised learning objectives, which are not specially tailored LNL losses.
> **Q5**. It is recommended to introduce sinkhorn from the beginning and to emphasize that there are new constraints based on it.
**A5**. Thanks for your advice. We will move Line 196-200 (Section 5) to Section 3 Preliminaries to introduce Sinkhorn algorithm from the beginning in our final version.
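For reference, the plain entropy-regularized Sinkhorn scaling that such a preliminaries section would introduce can be sketched as below (a generic textbook implementation under uniform marginals, not the paper's CSOT solver; all variable names are ours):

```python
import numpy as np

def sinkhorn(cost, alpha, beta, eps=0.5, n_iters=200):
    """Entropy-regularized OT: alternately rescale rows and columns of
    the Gibbs kernel until the coupling matches the marginals alpha, beta."""
    K = np.exp(-cost / eps)             # Gibbs kernel; eps = entropy strength
    u = np.ones_like(alpha)
    for _ in range(n_iters):
        v = beta / (K.T @ u)            # fit the column marginals
        u = alpha / (K @ v)             # fit the row marginals
    return u[:, None] * K * v[None, :]  # coupling matrix Q

B, C = 6, 3                             # samples x class centroids
cost = np.random.default_rng(0).random((B, C))
Q = sinkhorn(cost, np.full(B, 1 / B), np.full(C, 1 / C))
# rows of Q sum to 1/B and columns to 1/C, the classical OT constraint set
```

CSOT then replaces this equality constraint set with the relaxed curriculum constraints of Eq.(6), so the scaling step changes accordingly, which is why the authors derive a new solver rather than reusing this routine directly.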
> **Q6**. Figure readability including mislabeled index and font size.
**A6**. Thank you for advice. Figure 2 on page 9 is wrongly identified as Figure 4. We will revise these in our final version.
> **Q7**. More detailed description of the curriculum constraint in Eq.(6). Why should $Q 1_C$ not exceed $\frac{1}{B} 1_B$? Furthermore, it seems that $\frac{m}{C} 1_{C}$ is not a simplex.
**A7**. Firstly, the equality constraint $\alpha=\frac{1}{B}1_B$ in Eq.(2) signifies an equal assignment budget for each sample, demanding uniform mapping intensity to the class centroids. **For the purpose of partial assignment, our CSOT relaxes this equality to $Q1_C\leq\frac{1}{B}1_B$.**
Secondly, **while maintaining column equality ($Q^T1_B=\frac{m}{C}1_C$), $m\in[0,1]$ regulates the total coupling sum to control the curriculum budget**, as in Line 123-124. We visually demonstrate CSOT's coupling matrix in Figure S4 (Appendix B.2), revealing the effect of the curriculum factor $m$.
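As a concrete illustration of these constraints, a feasibility check for a coupling matrix can be written as below (a toy sketch; the function name and the example matrix are ours, not the paper's code):

```python
import numpy as np

def in_curriculum_polytope(Q, m, tol=1e-8):
    """Check the relaxed constraint set: row sums bounded above by 1/B,
    column sums equal to m/C (so the total transported mass equals m)."""
    B, C = Q.shape
    rows_ok = np.all(Q.sum(axis=1) <= 1 / B + tol)   # Q 1_C <= (1/B) 1_B
    cols_ok = np.allclose(Q.sum(axis=0), m / C)      # Q^T 1_B = (m/C) 1_C
    return bool(rows_ok and cols_ok)

# A feasible example for B=4, C=2, m=0.5: two samples receive a full
# 1/B assignment budget, the other two receive nothing.
Q = np.array([[0.25, 0.0],
              [0.0, 0.25],
              [0.0, 0.0],
              [0.0, 0.0]])
assert in_curriculum_polytope(Q, m=0.5)
```

Here the total transported mass equals $m$, so only a fraction of the batch receives a full assignment budget, which matches the partial-assignment behavior described above.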
> **Q8**. Suggest adding a description for the symbol ⌊·⌋ to clarify top-k selection in Eq.(7).
**A8**. Thanks for your advice. We will revise this in our final version.
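For illustration, the selection rule of Eq.(7), i.e. keep the ⌊mB⌋ samples with the highest confidence as clean, can be sketched as follows (a toy sketch; the function name and the score vector are ours, not the paper's code):

```python
import numpy as np

def select_clean(W, m):
    """Pick the top ⌊m·B⌋ most confident samples of a batch as 'clean'.

    W: (B,) per-sample confidence scores
    m: curriculum proportion in [0, 1]; ⌊·⌋ denotes the floor function
    Returns a boolean mask over the batch.
    """
    B = len(W)
    k = int(np.floor(m * B))             # ⌊mB⌋: the curriculum budget
    mask = np.zeros(B, dtype=bool)
    if k > 0:
        mask[np.argsort(W)[-k:]] = True  # indices of the k largest scores
    return mask

W = np.array([0.9, 0.2, 0.7, 0.4, 0.8])
mask = select_clean(W, m=0.5)            # ⌊0.5·5⌋ = 2 samples selected
# → selects the samples scored 0.9 and 0.8
```

The floor ensures an integer number of selected samples even when $mB$ is fractional, which is what the ⌊·⌋ notation in Eq.(7) expresses.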
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for the authors' reply. I increase my score to 5.
---
Rebuttal 2:
Comment: Thanks for your valuable comments and suggestions. We greatly appreciate your time and effort in reviewing our work. We have carefully considered each of your concerns and have made the necessary revisions to address them.
We sincerely hope that our responses have adequately addressed the concerns raised in your review. Please feel free to let us know if you have any further questions. We are dedicated to further clarifying and addressing any remaining issues to the best of our ability. | Summary: This paper introduces CSOT, an approach to address the challenge of noisy labels in machine learning models. CSOT incorporates optimal transport formulation to assign reliable labels during training, considering the structure of the sample distribution. The authors also propose an efficient computational method for solving CSOT. Experimental results demonstrate the superior performance of CSOT compared to existing methods for learning with noisy labels.
Strengths: - The paper is strongly motivated by theoretical analysis, particularly optimal transport analysis.
- The writing style is clear and easy to follow.
- CSOT exhibits superior performance when compared to previous algorithms.
Weaknesses: - The paper lacks a comparison with a baseline algorithm called UNICON [1], which has shown good performance in highly noisy scenarios (e.g., 0.9 noisy ratio). It would be valuable for the authors to include a performance comparison with UNICON.
- The authors do not analyze the case of instance-wise noisy labels, which is a prevalent type of noisy label model. Including an analysis of this case would be beneficial.
- The paper does not investigate the sensitivity of hyperparameters, which are required to run the algorithm. It would be valuable for the authors to perform a hyperparameter sensitivity analysis.
- To enable a comprehensive comparison, the authors should report both the best and last performances of the model, as models trained on noisy labels tend to memorize the noisy labels.
[1] UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning
Minor)
The legend size in Figure 2 is too small to read.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: -
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations of this paper are summarized in the "Question" and "Weakness" sections
------
(Raise score from 5 to 6 after rebuttal)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments which helped us improve the quality of our work. In the following, we have provided a point-by-point response to the comments.
> **Q1**. Lack of a comparison with a baseline algorithm called UNICON.
**A1**. Thanks for your constructive suggestion. We will add UNICON as one of the baseline methods in our final version. Here is a simplified comparison of UNICON and our CSOT on the CIFAR-10/100 datasets, which shows that **our CSOT still outperforms UNICON on CIFAR-100 under high noise ratios**.
Table 1. Comparison between UNICON and CSOT in test accuracy (\%) on CIFAR-10.
| |sym-20\%|sym-50\%|sym-80\%|sym-90\%|asym-40\%|
| ---- | ---- | ---- | ---- | ---- | ---- |
|UNICON| 96.0 | 95.6 | 93.9 | **90.8** | 94.1 |
|CSOT (Ours) | **96.6** | **96.2** | **94.4** | 90.7 | **95.5** |
Table 2. Comparison between UNICON and CSOT in test accuracy (\%) on CIFAR-100.
| |sym-20\%|sym-50\%|sym-80\%|sym-90\%|
| ---- | ---- | ---- | ---- | ---- |
|UNICON| 78.9 | 77.6 | 63.9 | 44.8 |
|CSOT (Ours) | **80.5** | **77.9** | **67.8** | **50.5** |
> **Q2**. Supplement an analysis of the case of instance-wise noisy labels.
**A2**. Thanks for your constructive suggestion. We validate the effectiveness of our proposed method under instance-wise noise with noise rates of 20\%, 40\%, and 60\% in the following tables.
Table 3. Comparison of instance-wise noise among DivideMix, NCE, and CSOT in test accuracy (\%) on CIFAR-10.
| noise rate |20\%|40\%|60\%|
| ---- | ---- | ---- | ---- |
|DivideMix| 92.26 | 93.86 | 54.34 |
|NCE| 96.00 | 95.24 | 75.58 |
|CSOT (Ours) | **96.21** | **95.82** | **76.65** |
Table 4. Comparison of instance-wise noise among DivideMix, NCE, and CSOT in test accuracy (\%) on CIFAR-100.
| noise rate |20\%|40\%|60\%|
| ---- | ---- | ---- | ---- |
|DivideMix| 77.52 | 73.58 | 38.72 |
|NCE| **80.40** | 74.92 | 70.34 |
|CSOT (Ours) | 79.22 | **76.55** | **72.71** |
> **Q3**. Supplement the sensitivity of hyperparameters.
**A3**. We analyzed the sensitivity of the hyperparameters $\varepsilon$ and $\kappa$ in our CSOT in the supplementary materials; please refer to Figure S7 and Figure S8 (Appendix B.6).
> **Q4**. Supplement both the best and last performances of the model.
**A4**. We have reported both the best and last performances for the CIFAR-10/100 datasets in Table 1. Following existing methods, we report only the last performance on the WebVision and Clothing1M datasets.
> **Q5**. The legend size in Figure 2 is too small to read.
**A5**. Thank you for pointing this out. We will adjust it to a more readable size in our final version.
---
Rebuttal Comment 1.1:
Title: Official Comments and decision.
Comment: Thank you for your detailed responses. I have read the responses from the authors to the questions I raised (including other reviewers' comments and responses). I would like to raise the score from 5 to 6. | Summary: This paper proposes a novel optimal transport formulation, called Curriculum and Structure-aware Optimal Transport (CSOT), for learning with noisy labels. CSOT considers both the inter- and intra-distribution structure of the samples to construct a robust denoising and relabeling allocator. Notably, CSOT is a new OT formulation with a nonconvex objective function and curriculum constraints. The authors develop a lightspeed computational method that involves a scaling iteration within a generalized conditional gradient framework to solve CSOT efficiently.
Strengths: 1. Novel Approach: The paper proposes a novel optimal transport (OT) formulation called Curriculum and Structure-aware Optimal Transport (CSOT) to address the challenge of learning with noisy labels. I believe this paper introduces a new perspective and potentially brings fresh insights to the field.
2. Consideration of Global and Local Structure: Unlike current approaches that evaluate each sample independently, CSOT concurrently considers the inter- and intra-distribution structure of the samples. This consideration of both global and local structure helps construct a more robust denoising and relabeling allocator, potentially leading to improved performance.
3. Incremental Assignment of Reliable Labels: CSOT incrementally assigns reliable labels to a fraction of the samples with the highest confidence during the training process. This approach ensures that the assigned labels have both global discriminability and local coherence, which could contribute to better generalization and reduced overfitting.
4. This paper provides a very detailed derivation for the lightspeed computational method.
Weaknesses: Researchers or practitioners interested in using CSOT may need to invest additional effort in adapting or developing specialized solvers.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How does CSOT incorporate the global and local structure of the sample distribution? Could you provide more details on the methodology used to capture this information?
2. Can you explain in more detail the nonconvex objective function of CSOT? How does the nonconvexity affect the optimization process, and how does the proposed method handle this challenge?
3. What is the iteration number required by the generalized conditional gradient?
4. Are there any specific scenarios or types of noisy labels where CSOT may not perform as well?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: CSOT is described as having a nonconvex objective function. Nonconvex optimization problems can be challenging to solve, and they may have multiple local optima, making it difficult to guarantee finding the global optimum. This could potentially impact the reliability and efficiency of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for our paper's positive feedback and constructive suggestions. Here are our responses to the reviewer's comments.
> **Q1**. Researchers or practitioners interested in using CSOT may need to invest additional effort in adapting or developing specialized solvers.
**A1**. Thank you for raising the concern.
Firstly, we would like to clarify that **our proposed solver works for arbitrary differentiable regularization terms $\Omega$ and marginal constraint vectors $\alpha$ and $\beta$** in Eq.(16), as stated in the Input row of Algorithm 2 (Line 1).
Secondly, **our code provides a versatile solver** (Algorithm 2) API that can be used with any user-defined $\Omega$, $\nabla\Omega$, $\alpha$, and $\beta$. This feature simplifies the process for follow-up researchers to conduct their experiments, especially for those who wish to validate the effectiveness of their own customized regularization terms $\Omega$. As a result, researchers do not have to spend time adapting or developing specialized solvers, thus enhancing the efficiency of their work.
Thirdly, we highlight that **our work provides a valuable example of developing a customized OT formulation and a corresponding solver**. By showcasing our method's adaptability and efficacy, we aim to contribute to the wider application and exploration of OT across various domains.
> **Q2**. Provide more details about incorporating the global and local structure of the sample distribution in CSOT.
**A2**. Firstly, the local structure, i.e. intra-distribution coherence among samples, is preserved by two local coherent regularized terms defined in Equations (4) and (5). Technically speaking, to formulate the coherent regularized terms, we construct the correlation among $i$-th sample, $j$-th sample, and $k$-th class centroid by element-wise multiplication. As shown in Figure 1 Top, classical OT tends to mismatch two nearby samples to two far-away class centroids when the decision boundary is not accurate enough. To mitigate this, our SOT generates local consensus assignments for each sample by preserving prediction-level and label-level consistency. Notably, for vague samples located near the ambiguous decision boundary, SOT rectifies their assignments based on the neighborhood majority consistency.
Secondly, the global structure, i.e. inter-distribution discriminability between samples and categories, is preserved by curriculum constraints $\Pi^c(\alpha,\beta)$ defined in Eq.(11). This property is inherited from the OT constraints $\Pi(\alpha,\beta)$ defined in Section 3 (Line 122), which enforces the marginal distributions of the coupling matrix equal to given samples and categories distribution $\alpha$ and $\beta$. As shown in Figure S3 (Appendix B.1), OT-based Pseudo-Labeling (PL) tends to generate more discriminative labels than prediction-based PL. To explain this, OT-based PL optimizes the mapping problem by considering the inter-distribution matching of samples and categories, rather than the prediction-based PL assigning labels solely in a per-class manner.
Thirdly, our proposed curriculum constraints incorporate the global and local structure of the sample distribution, prioritizing samples with better global discriminability and local coherence properties for label assignment, thereby enabling a robust curriculum allocator.
> **Q3**. How does the nonconvexity affect the optimization process, and how does the proposed method handle this challenge?
**A3**. Nonconvexity can bring problems such as local minima and slower convergence than in the convex case. However, solving nonconvex objectives with the generalized conditional gradient (GCG) algorithm is supported by strong convergence analyses [R1][R2], and we find that our CSOT converges quickly, as shown in Figure S5 (Appendix B.4).
Moreover, some OT-like nonconvex problems, such as the Gromov-Wasserstein problem, are also solved by the GCG algorithm with accompanying convergence analyses [10][19].
[R1] Bredies K, Lorenz D, Maass P. Equivalence of a generalized conditional gradient method and the method of surrogate functionals[M]. Bremen, Germany: Zentrum für Technomathematik, 2005.
[R2] Beck A. First-order methods in optimization[M]. Society for Industrial and Applied Mathematics, 2017.
> **Q4**. Specify the iteration number required by the generalized conditional gradient.
**A4**. As we specified in Section 6.1 (Line 238-239), the number of outer loops is set to 10, and the number for inner scaling iteration is set to 100.
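To illustrate the loop structure only (not the CSOT solver itself), a generic conditional gradient iteration looks like the following; here the inner Sinkhorn-like scaling is replaced by an exact linear oracle on a toy problem, and all names are ours:

```python
import numpy as np

def gcg(grad_f, linear_oracle, x0, n_outer):
    """Generic (conditional) gradient loop: linearize the objective at the
    current point, call an oracle that minimizes the linearization over the
    feasible set, then take a convex-combination step toward the oracle point."""
    x = x0
    for t in range(n_outer):
        s = linear_oracle(grad_f(x))   # best feasible point for the linear model
        gamma = 2.0 / (t + 2.0)        # standard diminishing step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Toy instance: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.1, 0.6, 0.3])
grad = lambda x: 2 * (x - c)
oracle = lambda g: np.eye(3)[np.argmin(g)]   # simplex vertex minimizing <g, s>
x = gcg(grad, oracle, np.full(3, 1 / 3), n_outer=2000)
# x approaches c, which lies inside the simplex
```

In CSOT the oracle step is itself a regularized OT subproblem solved by the inner scaling iterations, which is why only 10 outer loops suffice there; this toy uses a crude vertex oracle and hence needs many more iterations.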
> **Q5**. Specify the potential scenarios or types of noisy labels where CSOT may not perform as well.
**A5**. As we stated in Section 7, extreme class-imbalance cases are not considered in this paper. In highly imbalanced scenarios, the generated pseudo-labels would be biased due to the uniform class distribution vector $\beta$. However, we believe that our approach can be further extended for this purpose in future work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading of our paper and help with improving our manuscript. We sincerely appreciate that you find our work:
* proposes novel objective which is a solid idea (Reviewer 7s92)
* proposes a convincing, reliable, interesting and plausible method (Reviewer Zh5c)
* is strongly motivated by theoretical analysis (Reviewer ZdDn)
* provides a very detailed derivation for the lightspeed computational method (Reviewer gpRK)
* exhibits superior and strong performance (Reviewer ZdDn, Reviewer 7s92)
* introduces a new perspective and potentially brings fresh insights to the field (Reviewer gpRK)
In the subsequent sections, we aim to address the concerns and questions you raised, offering a comprehensive item-by-item response to each of your comments.
We have provided some additional experiments results as reviewers suggest. Due to space limitations, we've displayed the results table **in the global response PDF** for Reviewer Zh5c Q3(2) and Reviewer 7s92 Q2.
Pdf: /pdf/254fd0ad42c2b97c8971ea5f6135a586aa9a4ab7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a new noisy label learning approach based on Optimal Transport (OT) and Pseudo-Labeling (PL). Specifically, the authors extend OT-based PL by considering the intrinsic coherence structure of the sample distribution. Consequently, this paper proposes a novel optimal transport formulation, namely Curriculum and Structure-aware Optimal Transport (CSOT), which constructs a robust denoising and relabeling allocator that mitigates error accumulation. Experiments on both controlled and real noisy-label datasets show the effectiveness of the proposed method.
Strengths: 1. The paper proposes a method named Curriculum and Structure-aware Optimal Transport (CSOT) to address the problem of noisy label learning, and the application of OT-based pseudo-labeling in tackling noisy label learning problem has not been thoroughly investigated.
2. The experimental results on different datasets in this paper validate the effectiveness of the proposed method. Additionally, several ablation experiments are conducted to demonstrate the effectiveness of each module in the method.
Weaknesses: 1. In terms of methodological novelty, OT-based PL has been previously applied to other problems, and this paper only applies it to a specific problem, namely noisy label learning, rather than introducing it for the first time. Additionally, employing curriculum learning to address the issue of pseudo-labeling is a common strategy in the field of weakly supervised learning.
2. The utilization of SOT is a key contribution of this paper. However, the current motivation behind this aspect, as presented in Figure 1, is not sufficiently clear. The authors are encouraged to provide additional descriptions in this section to enhance the clarity and understanding of the motivation.
3. Since the differences between the proposed method and the comparison methods are too small in several cases, it is hard to draw a clear comparison without standard deviations. Additionally, the backbone and other parameter settings of the SOTA methods are not clearly listed, so further evidence is needed to establish the fairness of the comparisons.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments which helped us improve the quality of our work. In the following, we have provided a point-by-point response to the comments.
> **Q1**. OT-based Pseudo-Labeling (PL) is not proposed for the first time and curriculum learning to address the issue of pseudo-labeling is a common strategy.
**A1**. Firstly, we emphasize that directly applying the original OT to certain PL tasks may result in sub-optimal performance. Consequently, it **becomes necessary for researchers to investigate more adaptive OT formulations for specific problems** [9][54], including off-the-shelf variants like unbalanced OT and partial OT, to address the specific challenges of the problem. To this end, we propose a novel CSOT formulation tailored for the denoising and relabeling task. Notably, our CSOT formulation innovatively incorporates two local coherent regularization terms and curriculum constraints, enabling the incremental generation of reliable pseudo-labels for the Learning with Noisy Labels task.
Secondly, we would like to claim that **our work is the first to propose a curriculum scheme that fully considers the inter- and intra-distribution structure of the samples based on OT**, to the best of our knowledge. Also, introducing curriculum scheme to OT requires a new solver and **we innovatively propose a lightspeed computational method**. This approach stands in stark contrast to existing curriculum-based PL methods, setting our work apart in terms of its novelty and uniqueness.
Thirdly, we highlight that our work **provides a valuable example of developing a customized OT formulation and a corresponding solver**. By showcasing our method's adaptability and efficacy, we aim to contribute to the wider application and exploration of OT across various domains.
> **Q2**. Provide additional descriptions about the Structure-aware OT in Figure.
**A2**. Thanks for your constructive suggestion. We will add the following descriptions to Figure 1 in our final version.
"(Top) Comparison between classical OT and our proposed Structure-aware OT. Classical OT tends to mismatch two nearby samples to two far-away class centroids when the decision boundary is not accurate enough. To mitigate this, our SOT generates local consensus assignments for each sample by preserving prediction-level and label-level consistency. Notably, for vague samples located near the ambiguous decision boundary, SOT rectifies their assignments based on the neighborhood majority consistency."
Moreover, to further show the differences among prediction-, OT-, and Structure-aware OT-Based PL, we provided a more intuitive illustration in Figure S3 (Appendix B.1).
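For readers unfamiliar with OT-based pseudo-labeling, the baseline that SOT improves upon is easiest to see in code. Below is a minimal, self-contained sketch of *classical* entropic OT pseudo-label assignment via Sinkhorn iterations, with made-up toy data; this is our own illustration, not the paper's CSOT formulation or its solver.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=500):
    """Entropic OT between uniform marginals via Sinkhorn iterations."""
    n, k = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(k, 1.0 / k)  # sample / class marginals
    K = np.exp(-cost / reg)                          # Gibbs kernel
    v = np.ones(k)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]               # transport plan

# Toy setup: 4 samples, 2 class centroids; cost = squared distance, normalized.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.3, (2, 2)), rng.normal(5, 0.3, (2, 2))])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
cost = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
cost /= cost.max()
plan = sinkhorn(cost)
pseudo_labels = plan.argmax(1)   # hard pseudo-labels from the plan
```

Taking `plan.argmax(1)` recovers balanced hard labels; the structure-aware formulation described above additionally regularizes this plan toward prediction- and label-level local consistency.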
> **Q3**. Lack of standard deviations and experimental settings of the SOTA methods.
**A3**. Thank you for raising the concern. We did provide standard deviations on CIFAR10/100 in Table 1. Here we provide standard deviations for the Webvision dataset as follows:
Table 1. Comparison between NCE and CSOT in top-1/5 test accuracy (\%) on the
Webvision and ImageNet ILSVRC12 validation sets.
| |Webvision-top-1|Webvision-top-5|ILSVRC12-top-1|ILSVRC12-top-5|
| ---- | ---- | ---- | ---- | ---- |
|CSOT (Ours)| 79.67±0.14 | 91.95±0.21 | 76.64±0.16 | 91.67±0.18 |
For the experimental settings, we inherit the same backbone and optimizer parameters from the SOTA works NCE and DivideMix. More details are provided in Appendix A.1; please refer to our supplementary materials.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: Thank you for your responses. Your explanation has clarified some of the previous concerns. However, I still believe that this paper offers limited contributions to the current field of learning with label noise. Therefore, I will maintain my current score. | null | null | null | null | null | null |
Learning Causal Models under Independent Changes | Accept (poster) | Summary: The authors propose a score-based method for causal discovery from multi-environment data. Identifiability of the causal model and the environment partition is shown for the proposed score function. The proposed algorithm is evaluated on synthetic and multiple real data sets.
Strengths: 1. The toy example provides a nice illustration of the proposed idea.
2. Using a measure of the complexity of GP models as the score function for causal models is an interesting idea.
3. There are sufficient evaluations on synthetic and real data sets.
Weaknesses: 1. The score function (4) is written in terms of the true conditional probability, while the empirical score function is not defined. In the algorithm, it was not mentioned how to estimate the score function using the data.
2. There are missing explanations and plenty of typos in the main results.
Missing explanations:
(1) The role of the penalty term $R(X_{S})$ in $(4)$ should be explained even if it is from a previous work.
(2) The term $L(h)$ in $L(X|\mathcal{H})$ is not defined in Section 2.2.
(3) How is $\mathcal{H}_{k}$ constructed in the algorithm? How large is the function class?
Typos:
(1) Should the term $-\log P(X_{i}|X_{S})$ in (4) be $-\log P(X_{i}|f(X_{S}))$ or something else?
(2) LHS of (5) should be $X_{j}^{(c)}$.
(3) $\lambda$ in Theorem 3.2. is not defined in the score function.
(4) "Theorem3.1 assumes Assumptions 1-4, Theorem 3.2 assumes Assumptions 1-5, and Theorem 3.3 additionally assumes Assumptions 6."
(5) $X_{S} \subseteq X\setminus X_{i}$ in (4).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Overall, I think the idea is interesting and promising, but the authors should check for typos carefully.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your time and comments, please find our responses below.
**Score function**
> The score function (4) is written in terms of the true conditional probability, while the empirical score function is not defined. In the algorithm, it was not mentioned how to estimate the score function using the data. [...]
> How is $H_\kappa$ constructed in the algorithm? How large is the function class?
In practice, we fit a GP regression from the conditioning set to the target variable. Hence, we do not explicitly construct the hypothesis space $H_\kappa$. As we noted in Section 3.1, since $\kappa$ is the RBF kernel, $H_\kappa$ is dense in the set of continuous functions. However, the beauty of GP models is that, despite this, it is simple to learn the optimal model from any finite sample.
As this is fairly standard in GP regression, we omitted details on how to estimate the score function, but we will make this clearer.
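For concreteness, a minimal sketch of how such a GP-based score can be estimated from data, using the negative log marginal likelihood as an MDL-style code length. This is our own illustration with hypothetical toy data, not the authors' implementation, and it omits the regret/penalty term of the full score.

```python
import numpy as np

def gp_code_length(X, y, lengthscale=1.0, noise=0.1):
    """Negative log marginal likelihood of a GP with RBF kernel: an MDL-style
    code length for modeling y as a smooth function of X plus noise.
    (Illustrative only; the paper's score also includes a regret term.)"""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * lengthscale ** 2)) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

# Score two candidate parent sets for a target generated as Y = sin(X) + noise.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=(100, 1))
z = rng.uniform(-2, 2, size=(100, 1))             # an irrelevant variable
y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=100)
score_true = gp_code_length(x, y)                 # parents = {X}
score_wrong = gp_code_length(z, y)                # parents = {Z}
```

The true parent set yields a shorter code length (lower negative log marginal likelihood) than the irrelevant one, which is the asymmetry the score exploits.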
**Missing explanations**
> (1) The role of the penalty term $R(X_S)$ in (4) should be explained even if it is from a previous work.
The penalty is known as the regret term. For the regret of GP models, Kakade et al. (2005) give worst-case bounds [1]. We will add an explanation to our manuscript, thank you for pointing this out.
> The term L(h) in L(X|H) is not defined in Section 2.2.
That is a good point, $L(h)$ refers to a length function or code for the model $h$, and its definition depends on the given model class. In section 3.2, it is given by $||f||_{\kappa}^{2}$.
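Piecing together the quantities mentioned here, the two-part MDL score presumably takes the standard form (our reconstruction from the definitions above, not a quote from the paper):

```latex
L(X \mid \mathcal{H}) \;=\; \min_{h \in \mathcal{H}} \;
\underbrace{L(h)}_{\text{model cost, e.g. } \|f\|_{\kappa}^{2}}
\;+\;
\underbrace{L(X \mid h)}_{\text{data cost, } -\log P(X \mid h)}
```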
> Overall, I think the idea is interesting and promising, but the authors should check for typos carefully.
Thank you for listing out typos, we will fix these in our revision.
Please let us know whether we could address your concerns and whether you have further questions.
**References**
[1] Kakade, Sham M., Matthias W. Seeger and Dean Phillips Foster. "Worst-Case Bounds for Gaussian Process Models." *NIPS (2005)*.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thanks for the clarifications of the GP model.
Again, I think the main problem of the paper is the writing, especially the missing details. I vote for acceptance and adjust my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your feedback, we will include the missing details that you pointed out in our manuscript. | Summary: The authors present a novel approach for causal discovery that goes beyond partially directed graphs by utilizing Gaussian Process (GP) models. The proposed method aims to identify
the correct causal model under certain conditions. The key idea is to leverage algorithmic independence to achieve a concise and lossless description of the data, particularly in the presence of multiple contexts. Unlike existing approaches such as Sparse Mechanism Shift (SMS), which require conditional independence tests, the proposed method employs a scoring criterion based on GP models.
The authors established the theoretical soundness of this approach by providing a clear and concise explanation of the underlying principles. Additionally, they demonstrate the effectiveness of the method through several examples, which serve to evaluate its performance and help to illustrate the practical implications and potential benefits of the proposed method. Overall, the proposed method offers a new perspective on causal discovery by utilizing Gaussian Process models and algorithmic independence.
Strengths: LINC (Learning causal models under Independent Changes) proposed in this paper seems original, utilizing GPs and their complexity measure for discovering a mixture of mechanisms. Non-iid data is typically challenging, but in this paper's setting, multi-context data is the key to identifying the underlying causal structure. The overall flow of the paper is smooth, and it appropriately makes the necessary assumptions.
Weaknesses: Not a major weakness but minor weaknesses (more like comments)
Readability: It would have been beneficial to include explanatory information on prerequisite knowledge to enhance the accessibility of the paper. In particular, “algorithmic independence” seems very crucial, but it is just referenced without properly articulating its definition or the difference to conditional independence.
Assumption numbering is problematic. It seems that the authors removed one of the assumptions at the very last stage of writing; e.g., Assumption 5 is mentioned in Theorem 3.1, which is in the previous section. Also, Theorem 3.2 refers to Assumption 6, and Theorem 3.3 calls for Assumption 7, which does not exist.
It is less clear whether C is part of data or not.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In Theorem 3.1, lambda seems a bit abrupt. Is this the hyperparameter for RBF?
Instead of presenting Theorem 3.2 before 3.3, you may introduce 3.3 first and make 3.2 a corollary of 3.3 where |C| goes to infinity. (Or does 3.3 make an additional assumption (e.g., fixed partition size) that is not required for 3.2?)
I am not entirely sure whether the expression in Assumption 5 is valid. In the following paragraph the authors mentioned that partitions are random variables. Does it mean that C is a random variable where \Pi is just a deterministic function wrapping C?
In Figure 1(b), Lines 40-41 mention that X and Z are changed. But the Figure 1(a) shows the change wrt context is the function of Y not X nor Z. What am I missing?
In Sec 3.5 I couldn’t parse “Given the limited number of distinct context distributions … “ I couldn’t see any P( C ) defined clearly.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: It seems that the critical part of the paper is Section 3.5, where the authors actually operationalize the theoretical results of the previous sections. But I felt that the authors somehow hide some issues. E.g., finding partitions is exhaustive (or equivalently, exponential in the number of contexts), although line 227 mentions that the time complexity is in the supplementary material…
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your comments, which we address in the following.
**Limitations**
> It seems that the critical part of the paper is Section 3.5 where the authors actually operationalize theoretical results in the previous sections. But I felt that the authors somehow hide some issues? E.g., finding partitions is exhaustive (or equivalently, exponential in the number of contexts) (despite line 227 mentions that the time complexity is in supplementary material…)
In the main paper we only consider the exact version of LINC, which searches for the optimal partitioning by exhaustively considering all possible partitions. This provides guarantees on the output, at the cost of an indeed exponentially sized search space. As the number of contexts in real-world datasets is typically fairly small, e.g. less than ten contexts for the data by Sachs et al. (2005), we consider the exact version of LINC applicable in practice.
In the Appendix, we additionally provide a faster, heuristic version of LINC that employs clustering to find a good partitioning of the contexts. Although it does not come with theoretical guarantees, it performs equally well in practice and is much faster. As we are primarily interested in the guarantees, we consider this version of LINC a 'fun' additional result, rather than a main contribution.
**Clarity**
> In particular, “algorithmic independence” seems very crucial, but it is just referenced without properly articulating its definition or the difference to conditional independence.
That is a good point, we will include the definition in the paper.
In effect, as noted in Postulate 2.1, the algorithmic independence of causal mechanisms amounts to the idea that compressing them independently of one another is optimal.
> It is less clear whether C is part of data or not.
$C$ is indeed part of the data, that is, we need to know in advance which context the data samples come from.
> In Theorem 3.1, lambda seems a bit abrupt. Is this the hyperparameter for RBF?
You are right, and we will introduce $\lambda$ more clearly. It is a hyperparameter that we need in the proof to bound the score differences between the true DAG and the one minimizing our score.
> Instead of presenting Theorem 3.2 before 3.3, you may introduce 3.3 and just make 3.2 corollary of 3.3 where |C| goes infinity. (or does 3.3 make an additional assumption (e.g. fixed partition size) which is not required for 3.2?)
Yes, Thm. 3.3 makes the assumption of fixed partition sizes, which is not required for Thm. 3.2; this is why we presented them in this order. We will fix our assumption numbering to make this clear.
>I am not entirely sure whether the expression in Assumption 5 is valid. In the following paragraph the authors mentioned that partitions are random variables. Does it mean that C is a random variable where \Pi is just a deterministic function wrapping C?
Exactly, you can see $\Pi$ as deterministic functions of C. The result is a categorical random variable that indicates the group membership, rather than context membership, of samples. While this is a slight abuse of notation, we want to consider $\Pi$ as random variables here so that we can assume their independence.
>In Figure 1(b), Lines 40-41 mention that X and Z are changed. But the Figure 1(a) shows the change wrt context is the function of Y not X nor Z. What am I missing?
As you say, the function of $Y$ changes in both examples, but in addition, there are causal functions of $X$ and $Z$ that also change. In Fig. 1(b), we show what happens when $X, Z$ both have two causal functions, resp. five functions in Fig. 1(c). We compare these two situations to show that the true causal functions for $Y$ are not affected by this (independent).
In the illustration, we omitted the causal functions for $X, Z$ to avoid clutter, but we can include them to make it more clear.
>In Sec 3.5 I couldn’t parse “Given the limited number of distinct context distributions … “ I couldn’t see any P( C ) defined clearly.
Our writing is unclear here; in general, the number of contexts can be arbitrarily large, but in practical applications, we usually only have few contexts available, for example, we have fewer than ten interventional experiments in the data by Sachs et al. (2005).
We are happy to respond to additional concerns and questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and I am happy to keep my positive assessment of the paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your quick response, we will take your points into account in the revised version. | Summary: This paper addresses the problem of causal discovery with heterogenous data coming from multiple contexts where contexts are characterized by soft/hard interventions. Previous work differs in assumptions on how non-iid data is produced, the primary assumption being the Sparse Mechanism Shift assumption which assumes that the number of mechanism changes is small. The current paper proposes a score-based approach to discovering the casual graph from data coming from different contexts. For each variable, the set of contexts can be partitioned where in each bin of the partition, the mechanism that represents the cause-effect relationship between the variable and its parents is unchanged. The score-based approach comprises of a) functionally modeling the relationship between variables and their parents using Gaussian processes (GPs) for each context bin and b) computing a score that is based on minimum description length (MDL) of the GP model. The paper provides identifiability guarantees for identification up to its Markov Equivalence class (MEC). With an additional assumption about independence of context-partitions the authors are able to obtain asymptotic (in the number of contexts) identification guarantees beyond the MEC. Numerical results on synthetic, semi-synthetic and real datasets validate the proposed method's superiority over existing methods.
Strengths: This paper provides a novel algorithm for an important problem of causal discovery using heterogenous data. The main strength of the paper lies in the experimental validation of its proposed method, LINC. The evaluation is done on multiple types of datasets and LINC is shown to outperform or is at least competitive to existing methods on all. While, the idea of using MDL-based scores is not novel in causal discovery, the idea of combining Kakade et. al.'s work in causal discovery is novel. On the theoretical side, while I have questions about the assumptions that I elaborate in the sections below, the paper proves identifiability guarantees in both the asymptotic (in number of contexts) and and finite-context settings.
Weaknesses: Listing out a few weaknesses:
1) Justification of assumptions: The core result of identifiability beyond the MEC depends on the independent mechanism shift assumption. I wasn't convinced about why this assumption makes sense. In particular, why does the assumption imply Line 184-185. I also didn't find any practical justification of the fixed-partition sizes assumption. In general, justifying the assumptions more clearly and with a practical example in mind can help strengthen this weakness.
2) Writing: The writing can be improved greatly. There are undefined notations, and a lot of important content of the paper has not been explained in the first 9 pages. Some examples: a) The main score function in Line 161 is not explained or defined anywhere. Given that LINC is explained only in the appendix, I found it difficult to even understand what the score function was. b) The independent mechanism shift assumption also contains the notation $\Pi_i(C)$, which hasn't been defined before (but is clear from context). c) Assumption numbers are mismatched. d) Words are used without being defined: "algorithmically independent" in Line 109, "direct intervention" in Line 34.
3) Experiments: Most of the evaluation is based on recovery of the causal graph whereas identification guarantees also address recovering the partitions. Can some experimental results be shown to verify if the partitions are being discovered correctly?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Overall, I think this is a potentially good submission subject to clarifications on a few questions, some of which I have added in the weaknesses section, and some that I outline below:
1) Algorithm 1 (LINC) has a step where it cycles through candidate graphs in the MEC. Where do these candidate graphs come from, i.e., how is the MEC obtained?
2) LINC seems to be doing quite well even with just iid data and also in cases with very few contexts. Any explanation about this phenomenon would be interesting.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: Authors don't address any potential negative impacts explicitly. Since this is an incremental advance of an established line of research, I don't see an issue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your detailed feedback.
**Independent Mechanism Shift Assumption**
> The core result of identifiability beyond the MEC depends on the independent mechanism shift assumption. I wasn't convinced about why this assumption makes sense.
The assumption is easiest to understand in the context of interventional experiments. Consider a gene knockout intervention. This is an exogenous influence and independent of the causal interactions between other genes. Similarly, Sachs et al. (2005) perform soft interventions by adding compounds that activate or inhibit specific molecules. In both examples, a mechanism change is due to an external influence and does not depend on the causal mechanism of other variables. In general, this independence holds in generic situations where there is no confounding behind cause and effect (Huang et al. 2020).
> In particular, why does the assumption imply Line 184-185.
*Lines 184-185 from the paper:*
> we consider our partitions $\Pi$ as independent random variables. This ensures that incorrectly directed edges lead to larger numbers of modeled mechanisms and thus larger values of our score $L$.
To see this, consider the example in Fig. 1 of the manuscript. The true partition for $Y$ has two groups, and the partition of its child $Z$ has either two (Fig. 1b) or five (Fig. 1c) groups. In the true causal model, the partitions for $Y$ and $Z$ are independent. Indeed, in the causal direction, we see no differences in our MDL scores between both cases (green curves in Fig. 1b, 1c) and discover the true partition for $Y$ (green star). If we consider an anticausal function $Z \to Y$, in contrast, there is a difference between both cases (red curves). The partition we discover for $Y$ depends on the partition of $Z$, and in both cases it has more groups and larger score values compared to the causal direction.
In the proof of Thm. 3.2, we show more generally that under independence of the true partitions, incorrectly directed edges lead to more groups and larger score values of the discovered partitions.
**Fixed Partition Sizes**
> I also didn't find any practical justification of the fixed-partition sizes assumption.
By fixing partition sizes, we fix the number of different causal mechanisms for each of the variables $X_i$. In effect, we assume that all heterogeneity in causal effects is captured in our model.
E.g., if we studied the differences between treatments in different hospitals, as we obtain more data, these data would be obtained from the *same* set of hospitals, rather than by the addition of new hospitals. By ensuring that the partition sizes are fixed, we ensure that the things we are trying to learn do not change as we obtain more data.
**Experiments**
> Can some experimental results be shown to verify if the partitions are being discovered correctly?
This is a good point. To illustrate this, we sample causal DAGs with our base settings. We pick a variable at random and evaluate whether LINC discovers the partition that shows its mechanism shifts. To do so, we count the context pairs that LINC correctly assigns to a different group (TP), the same group (TN) or mistakenly to a different (FP) or same group (FN). We report precision, recall, and F1 over 100 runs.
| | Recall | Precision | F1 |
|-|-|-|-|
| **Mechanism Shift** |0.88|0.83|**0.85**|
| Noise Scaling |1.0|0.84|**0.91**|
| Hard Iv. (synthetic) |0.62|0.62|**0.62**|
| Hard Iv. (SERGIO) |0.68|0.68|**0.68**|
The first row shows the case where the causal mechanism of our variable changes, and we confirm that LINC discovers a high fraction of mechanism changes (recall) with few false discoveries (precision). We also included the cases where our model is misspecified, among which LINC handles noise interventions especially well.
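For reference, the pair-counting evaluation described above can be sketched as follows. This is our reconstruction of the described protocol with hypothetical example partitions, not the authors' evaluation code.

```python
from itertools import combinations

def partition_pair_metrics(true_groups, found_groups):
    """Precision/recall/F1 over context pairs, where a 'positive' pair is one
    whose two contexts lie in different groups (a mechanism shift between them)."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(true_groups)), 2):
        truly_split = true_groups[i] != true_groups[j]
        found_split = found_groups[i] != found_groups[j]
        tp += truly_split and found_split
        fp += (not truly_split) and found_split
        fn += truly_split and (not found_split)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: 5 contexts, one group boundary discovered in the wrong place.
prec, rec, f1 = partition_pair_metrics([0, 0, 1, 1, 2], [0, 0, 1, 2, 2])
```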
**Algorithm**
> Algorithm 1 (LINC) has a step where it cycles through candidate graphs in MEC.
As candidate graphs, we start from all DAGs within the Markov equivalence class of the true DAG. If this class is unknown, there exist standard methods to infer it [1].
**The i.i.d. Case**
> LINC seems to be doing quite well even with just iid data and also in cases with very few contexts. Any explanation about this phenomenon would be interesting.
This is an interesting observation that can give an insight into the benefits of an MDL-based approach. For the i.i.d. case, we refer to identifiability results by Marx and Vreeken (2019) [2]. The idea is that under nonlinear additive noise models, regression residuals are independent in the causal, but not in the anti-causal direction. The authors show that MDL-based scoring functions can capture this, as residual independence results in a better compression in the causal direction. This explains why our MDL-based score can determine causal directions on i.i.d. data, while the mechanism shift counting scores recover the MEC (Fig. 2a).
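To illustrate the residual-independence asymmetry underlying this argument, here is a small self-contained sketch of our own, using a polynomial fit and a biased HSIC estimate rather than the GP/MDL machinery of the paper: in the causal direction the regression residuals are (nearly) independent of the input, while in the anti-causal direction they are not.

```python
import numpy as np

def hsic(x, y, sigma=0.5):
    """Biased HSIC estimate with RBF kernels -- near zero when x and y
    are (approximately) independent."""
    n = len(x)
    gram = lambda v: np.exp(-(v[:, None] - v[None, :]) ** 2 / (2 * sigma ** 2))
    H = np.eye(n) - np.full((n, n), 1.0 / n)
    return np.trace(gram(x) @ H @ gram(y) @ H) / n ** 2

# ANM toy example: X -> Y with Y = X^3 + uniform noise. Fit a cubic both ways
# and compare residual dependence on the regressor.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 300)
y = x ** 3 + 0.2 * rng.uniform(-1, 1, 300)
res_causal = y - np.polyval(np.polyfit(x, y, 3), x)      # residuals of Y | X
res_anticausal = x - np.polyval(np.polyfit(y, x, 3), y)  # residuals of X | Y
h_causal, h_anti = hsic(x, res_causal), hsic(y, res_anticausal)
```

The dependence score is markedly larger in the anti-causal direction, which is the signal an MDL-based score can pick up as poorer compression.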
As for the case of very few contexts, it is known that under optimal intervention sets, only $O(\log(m))$ contexts suffice to obtain full identifiability for arbitrary graphs, suggesting that in general only a few contexts are likely needed [3]. Similarly, from our theoretical results we find that in the case with only two mechanisms per variable, each occurring with probability $p = 0.5$, identifiability is highly probable when we observe around 20 contexts, even when the ground truth contains 10000 edges.
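As a back-of-envelope check of this last claim, consider an illustrative model of our own (not the paper's actual bound): each variable has two mechanisms, each context draws one uniformly at random, and an edge stays ambiguous roughly when the parent and child mechanism patterns coincide across all contexts up to relabeling. A union bound over the edges then gives:

```python
C, E = 20, 10_000                       # contexts, edges (numbers from the text above)
p_edge_ambiguous = 2 * 0.5 ** C         # prob. two 20-bit patterns coincide up to relabeling
p_any_ambiguous = E * p_edge_ambiguous  # union bound over all edges, roughly 0.019
```

So under this toy model, full identifiability holds with probability around 98%, consistent with the claim.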
Please let us know if you have additional concerns.
**References**
[1] Mooij, Joris M., Sara Magliacane, and Tom Claassen. "Joint causal inference from multiple contexts." *J. Mach. Learn. Res.* 21, Article 99 (2020), 108 pages.
[2] Marx, Alexander, and Jilles Vreeken. "Identifiability of Cause and Effect Using Regularized Regression." In *KDD*, 2019.
[3] Hauser, Alain, and Peter Bühlmann. "Two optimal strategies for active learning of causal models from interventional data." *International Journal of Approximate Reasoning* 55.4 (2014): 926-939.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the detailed response. My concerns have been largely addressed. I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will update our manuscript accordingly. | Summary: The authors study the problem of causal discovery from data under different conditions/contexts. The approach uses an algorithmic model of causation, where the idea is that causal mechanisms provide short (or simple) descriptions of the observed data. Under this principle, the authors propose a score function for models where the functional mechanisms are Gaussian processes (GPs). The main contributions of this work are Theorems 3.1, 3.2, and 3.3, where the authors show identification of the MEC or ground-truth DAG. The core assumptions for such theorems are:
* Causal sufficiency
* All contexts share the same DAG
* Additive noise model with GPs
* Causal minimality
* $\Pi$-faithfulness
* Independence of mechanism shifts
The authors develop LINC to learn the causal DAG from different contexts, and provide some experiments to validate their results.
Strengths: ### Potential Reasons for Acceptance
- The paper provides a fair investigation into the core assumptions that many causal modeling approaches, including SMS, rely upon, and moves the field forward by addressing these assumptions.
- Novelty of the proposed approach "LINC": By adopting the algorithmic notion of independence and Gaussian Process models, the authors elevate the ability to identify the accurate causal model and extend the scope beyond partially directed graphs.
- Identifiability theory: The authors provide theoretical justification for their approach, followed by some empirical evaluations on both synthetic and real-world datasets which aim to validate their claims.
Weaknesses: ### Potential Reasons for Rejection
- The DAG from a single context is identifiable as it is a nonlinear additive noise model.
- Theorem 3.2 seems very unrealistic when considering an infinite number of contexts. While the authors try to provide some justification for it via a finite-sample statement in Theorem 3.3, I got confused by the idea of $C$ falling into some "bins". What does it mean for $C$ to fall into two or more different bins? The notion of bins was never used until this theorem and was not properly described, in my opinion.
- Dependency on number of datasets:
- The effectiveness of the proposed model heavily depends on certain characteristics of the dataset such as having enough contexts, which might not always be the case.
- Complexity of the proposed solution:
- The proposed solution is relatively complex, particularly concerning the computational cost. Several experiments are performed on a very small number of nodes, e.g., six-node graphs.
- Some heuristics are provided in the appendix to alleviate this issue, however, some details are missing. For example, in Figure 8(b), MC seems faster and obtains similar F1 scores to LINC.
- The writing can be improved by a fair margin. Learning a DAG from different contexts is not a new setting, and the authors could greatly reduce the amount of text spent on somewhat repetitive definitions of the model in Line 69, Assumption 1, and Assumption 2. Indeed, Assumption 1 seems redundant if Assumption 2 already holds, no? If the theoretical results rely on GPs, why not simply state that in the problem setting? Finally, Theorems 3.1 and 3.2 mention a "sufficiently small $\lambda$", but such a $\lambda$ does not appear in the result. By looking at the text there is a $\lambda$ from the RBF in Line 139; however, in the appendix I got confused as it seems to refer to a different quantity.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * Why is $|E|$ in the lower bound in Theorem 3.3. inside exp? I might be missing something but doesn't this come from the union bound and, hence, shouldn't it be outside exp?
* My main criticism of this work is that the model under consideration is identifiable from observation data (Peters et al., 2014). Moreover, since the authors assume that **all contexts share the same DAG**, why couldn't one identify the DAG from a single context?
* Please fix the assumption numbering in the theorem statements.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Briefly described in Section 6. No major concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and detailed comments, which we address in the following.
**Identifiability given a single context**
> The DAG from a single context is identifiable as it is a nonlinear additive noise model.
While the DAG is indeed identifiable for *many* generic functions, the results of Peters (2014) [1] are obtained by assuming differentiability of the underlying distributions as well as causal mechanisms.
Since we model our causal mechanisms via GPs, these assumptions do not apply, as individual GP sample paths are in general not differentiable. For our more general setting, we are not aware of proofs that the DAG is identifiable from a single context.
Peters (2014) [1] give examples where the causal direction is not identifiable via independence of residuals (Ex. 2). Given these limitations, it is interesting to explore independence of causal mechanisms as an alternative criterion, and we hope that our work improves the current theoretical understanding.
**Dependency on the number of contexts**
> Theorem 3.2 seems very unrealistic when considering infinite number of contexts.
Infinite numbers of contexts are of course highly unrealistic, which is why we provide refined bounds in Thm. 3.3. Note also that our results address the case where our data comes from random environments over which we have no control.
When we do have control over the environments, i.e., when we can actively perform interventions with known effects, then general results on the numbers of required contexts apply, requiring data from $O(\log(m))$ environments for $m$ variables. [2]
> While the authors try to provide some justification to it by providing a finite-sample statement in Theorem 3.3, I got confused with the idea of $C$ falling into some "bins". What does it mean for $C$ to fall into two or more different bins?
We agree that this description is confusing and will improve the explanation. The assumption that the partition sizes are fixed can be understood in a frequentist sense, in that on average each possible causal mechanism for each variable $X_j$ occurs in at least a constant fraction of all samples. For example, if we study the differences between treatments in different hospitals, as we obtain more data, these data would be obtained from the *same* set of hospitals, rather than by the addition of new hospitals.
> The effectiveness of the proposed model heavily depends on certain characteristics of the dataset such as having enough contexts, which might not always be the case.
Note that in Theorem 3.3 we show how many contexts it takes us to identify both the causal graph *as well as* the partition of the causal mechanisms across environments. If we care only about identifiability of the causal graph, then two mechanisms per variable (for those variables which are not already directed by Meek's rules) suffice, which dramatically reduces the number of contexts required. In particular, if we obtain samples with equal probabilities for each mechanism, then we require only 20 contexts to orient up to 10000 edges with probability >0.99, independently of the precise structure of the graph.
While it is true that these 20 contexts are a lot more than the ~8 required for optimal intervention design for causal learning [2], our results do not require us to have control over the intervention targets, an assumption which would in many cases be *even more unrealistic*.
We also concede that observing a system in multiple contexts at all is an assumption that might not always be realistic. Exactly this point, however, can be an argument for using LINC instead of mechanism shift scores, since LINC already works well given few, or even a single, context, as we observed in our evaluation. As an explanation of this phenomenon, we refer to Marx and Vreeken (2019) [3] who give identifiability results for i.i.d. data with MDL-based scoring functions, via independence of regression residuals.
In light of this, we do not consider the dependence on the number of contexts a limiting concern in practice.
**Other concerns**
> The proposed solution is relatively complex, particularly concerning the computational cost.
We consider our theoretical insights to be our main contribution, with the current implementation serving as a proof of concept. We also see the complexity of the solution as a current limitation, and our efficient heuristics can be further optimized (as you point out, the current solution is, for example, slower than MSS in some cases).
> Why is $|E|$ in the lower bound in Theorem 3.3. inside exp? I might be missing something but doesn't this come from the union bound and, hence, shouldn't it be outside exp?
This is not a union bound on the probability of the union of some set of events; rather, it is a bound on the probability of the intersection of multiple events. In effect, we assume that every edge has to be directed independently of all others. This is in fact a *lower bound* on the probability, since determining the direction of some edges can allow for orienting others using Meek's rules.
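To make this concrete, here is an illustrative calculation (the exact constants in Theorem 3.3 may differ; this sketch only shows why $|E|$ ends up inside the exponential). If each of the $|E|$ edges is oriented correctly with probability at least $1-\delta$, independently of the others, then

$$\Pr[\text{all } |E| \text{ edges oriented}] \;\ge\; (1-\delta)^{|E|} \;=\; \exp\!\big(|E|\ln(1-\delta)\big) \;\ge\; \exp\!\Big(-\frac{|E|\,\delta}{1-\delta}\Big),$$

where $\delta$ denotes an (assumed) per-edge failure probability. A union bound over the failure events would instead give the additive lower bound $1-|E|\delta$, with $|E|$ outside any exponential.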
> [...] the authors could greatly reduce the amount text of somewhat repetitive definitions of the model in Line 69, Assumption 1 and Assumption 2.
Thank you for your suggestions on shortening the problem setting, we will remove Assumption 1 as it is subsumed by Assumption 2.
Please let us know whether we could address your concerns. We are happy to respond to any further questions and concerns.
**References**
[1] Jonas Peters, Joris M. Mooij, Dominik Janzing, and Bernhard Schölkopf. 2014. "Causal discovery with continuous additive noise models." *J. Mach. Learn. Res.* 15, 1 (January 2014), 2009–2053.
[2] Hauser, Alain, and Peter Bühlmann. "Two optimal strategies for active learning of causal models from interventional data." *International Journal of Approximate Reasoning* 55.4 (2014): 926-939.
[3] Marx, Alexander, and Jilles Vreeken. “Identifiability of Cause and Effect Using Regularized Regression.” In *KDD*, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. I did not have further clarification questions. I will make sure to carefully include your responses during the reviewers' discussion period.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We will revise our manuscript according to your comments. | Rebuttal 1:
Rebuttal: **Summary Response to All Reviewers**
We thank all reviewers for their detailed comments and summarize our response to the main concerns below.
- **Identifiability from a single, few, or many contexts:** We clarify under which conditions the causal DAG is identifiable from a single context to motivate why we study identifiability from multiple contexts (`t48a`). We also state the implications of Thm. 3.3 on the number of contexts needed for DAG identifiability when we have no active control over the interventions (`t48a`), as well as explain the observation that LINC already discovers causal directions given a single context (`LpAE`).
- **Additional explanations**: We provide additional explanations on our assumptions in the individual responses (`LpAE`, `oMbi`, `NaRE`). We will add these to the manuscript as well as fix our assumption numbering.
We kindly ask the reviewers to take our responses into account in their scores, as we think that the feedback helped us strengthen the presentation of our work. We are happy to respond to follow-up questions. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
WITRAN: Water-wave Information Transmission and Recurrent Acceleration Network for Long-range Time Series Forecasting | Accept (spotlight) | Summary: To capture semantic information and repetitive patterns concurrently, the authors propose the WIT framework. By utilizing bi-granular information transmission and HVGSU, the framework can model the inherent repetitive patterns as well as correlations of time series. The authors also use a generic RAN to reduce time complexity. Several experiments conducted by the authors demonstrate that their framework can outperform existing methods in time series forecasting.
Strengths: S1: The logic and presentation of the paper are easy to follow.
S2: The problem they focus on is very essential as well as appealing.
S3: Experiments reveal that the performance of the authors’ model surpasses existing SOTA baselines.
Weaknesses: W1: The authors may include more experiments such as robustness check to further evaluate the performance of the framework.
W2: There are some typos and minor errors that do not influence the understanding of this work, which should be carefully checked. For instance, line 29, semantic information include-> semantic information includes
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: Can the authors include more experiments to further evaluate the performance of the framework?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not Applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are sincerely grateful to Reviewer 2QPt for their constructive feedback and recognition of our work.
> Q1: The authors may include more experiments such as robustness check to further evaluate the performance of the framework. Can authors include more experiments to further evaluate the performance of the framework.
Thank you for your suggestion. Here, we have followed MICN and introduced a simple white noise injection to demonstrate the robustness of our model. Specifically, we randomly select a proportion $\varepsilon$ of data from the original input sequence and apply random perturbations within the range $[-2X_{i}, 2X_{i}]$ to the selected data, where $X_{i}$ denotes the original data. After the noise injection, the data is then used for training, and the MSE and MAE metrics are recorded.
Due to space constraints, we have included the table for this section in a separate PDF file. Please refer to Table B in the newly submitted PDF file for specific results.
As the perturbation proportion $\varepsilon$ increases, there is a slight increase in the MSE and MAE metrics in terms of forecasting. It indicates that WITRAN demonstrates good robustness when dealing with less noisy data (up to 10%), and it possesses a significant advantage in effectively handling various abnormal data fluctuations.
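For concreteness, a minimal NumPy sketch of the perturbation procedure as described above (the function name, the random seed, and the use of $|X_i|$ to keep the sampling interval ordered are our own illustrative choices, not the paper's exact implementation):

```python
import numpy as np

def inject_noise(x: np.ndarray, eps: float, seed: int = 0) -> np.ndarray:
    """Perturb a fraction `eps` of the points in series `x`.

    Each selected point X_i receives an additive perturbation drawn
    uniformly from [-2*X_i, 2*X_i] (illustrative reading of the
    MICN-style robustness check; |X_i| keeps the interval ordered
    when X_i is negative).
    """
    rng = np.random.default_rng(seed)
    x = x.copy()
    # pick floor(eps * n) distinct positions to perturb
    idx = rng.choice(len(x), size=int(eps * len(x)), replace=False)
    low, high = -2 * np.abs(x[idx]), 2 * np.abs(x[idx])
    x[idx] = x[idx] + rng.uniform(low, high)
    return x
```

The perturbed series would then replace the original input during training, after which MSE and MAE are recorded as usual.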
> Q2: There are some typos and minor errors that do not influence the understanding of this work, which should be carefully checked. For instance, line 29, semantic information include-> semantic information includes
We greatly appreciate the reviewer's meticulous and comprehensive review of our paper, including the appendix. We will conduct a careful check and correct these typos and minor errors in subsequent versions of the paper.
---
Rebuttal Comment 1.1:
Title: discussion
Comment: Dear Reviewer 2QPt,
Thank you for being a reviewer for NeurIPS2023, your service is invaluable to the community!
The authors have submitted their feedback.
Could you check the rebuttal and other reviewers' comments and start a discussion with the authors and other reviewers?
Regards,
Your AC | Summary: This paper focuses on the long-range time series forecasting problem. An interesting model, Water-wave Information Transmission and Recurrent Acceleration Network is proposed, which captures both short- and long-term recurrent patterns via bi-granular information transmission. The proposed model also captures global and local correlations using horizontal and vertical information transmission. This model is an interesting modification of the RNN network, and it significantly outperforms many transformer-based models.
Strengths: S1. This paper reviews the shortcomings of transformer models and makes modifications to the RNN structure.
S2. A bi-granular information transmission is proposed to capture short- and long-term recurrent patterns, which is easy to understand and does not require additional methods (e.g., FFT) to extract periodicity.
S3. A recurrent acceleration network is proposed, which reduces the time complexity to O(√L) while maintaining the memory complexity at O(L).
Weaknesses: W1. Some previous RNN-based models, such as ConvLSTM [1]/ PredRNN [2]/ PredRNN++ [3], have yet to be compared.
[1] Shi X, Chen Z, Wang H, et al. Convolutional LSTM network: A machine learning approach for precipitation nowcasting[J]. Advances in neural information processing systems, 2015, 28.
[2] Wang Y, Long M, Wang J, et al. Predrnn: Recurrent neural networks for predictive learning using spatiotemporal lstms[J]. Advances in neural information processing systems, 2017, 30.
[3] Wang Y, Gao Z, Long M, et al. Predrnn++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning[C]//International Conference on Machine Learning. PMLR, 2018: 5123-5132.
W2. Some minor problems:
1) It is recommended to enlarge Figure 1(i) and make horizontal lines clearer.
2) It is recommended to unify TF_{en} and TFE_{de}.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. Please explicitly explain the advantages of the WITRAN model over other forecasting models from the perspective of information transmission process.
Q2. Please give the adaptation and limitation of the WITRAN model.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the valuable comments and recognition provided by Reviewer Ku6W regarding our work.
> Q1: Some previous RNN-based models, such as ConvLSTM / PredRNN / PredRNN++, have yet to be compared.
Thank you for your suggestion. We have supplemented the experimental results for these methods. Due to space limitations, we have included the specific experimental results of this section in the Global Rebuttal. The experimental results in this section, combined with the findings from Tables 2 and 3, provide further comprehensive evidence that WITRAN outperforms the state-of-the-art methods.
> Q2: Some minor problems: (1) It is recommended to enlarge Figure 1(i) and make horizontal lines clearer. (2) It is recommended to unify TF_{en} and TFE_{de}.
Thank you for reviewing our paper, including the appendix, carefully and comprehensively. We will address these minor problems in subsequent versions and conduct a more meticulous check of the paper.
> Q3: Please explicitly explain the advantages of the WITRAN model over other forecasting models from the perspective of information transmission process.
For long-range time series forecasting tasks, previous studies have highlighted two crucial aspects of semantic information: on one hand, long- and short-term periodic semantic information; on the other, local-global semantic information. WITRAN's advantages can be summarized as follows: (1) Through its bi-granular information transmission, WITRAN can directly capture long- and short-term periodic semantic information in long-range time series. Additionally, it shortens the information transmission path, effectively mitigating gradient explosion/vanishing when dealing with long-range time series. (2) Utilizing its recurrent structure, WITRAN gradually captures local semantic information and integrates it into the global semantic context without the need for additional layers. (3) WITRAN can effectively capture both of the key semantic aspects mentioned in (1) and (2), which was not achievable with previous methods. (4) RAN can serve as a universal framework for integrating other models to facilitate information fusion and transmission, reducing the time complexity to $\mathcal{O}(\sqrt{L})$ while maintaining a memory complexity of $\mathcal{O}(L)$.
> Q4: Please give the adaptation and limitation of the WITRAN model.
Through extensive experiments on different datasets and tasks, we have demonstrated the strong performance of WITRAN in various domains, such as energy, traffic, and weather forecasting. The results presented in Tables 2 and 3 indicate that WITRAN performs exceptionally well in long-range and ultra-long-range sequence forecasting tasks, showcasing its adaptability to such scenarios. Additionally, the RAN framework of WITRAN serves as a generic acceleration framework, as effectively illustrated in Section 4.2. However, it is worth noting that WITRAN does have its limitations, as mentioned in the Conclusions section. Specifically, the Python-based implementation of WITRAN is not as efficient as the nn.GRU / nn.LSTM implementations in PyTorch, which are based on C++. Therefore, we plan to explore the integration of WITRAN into an interface using C++.
---
Rebuttal Comment 1.1:
Title: discussion
Comment: Dear Reviewer Ku6W,
Thank you for being a reviewer for NeurIPS2023, your service is invaluable to the community!
The authors have submitted their feedback.
Could you check the rebuttal and other reviewers' comments and start a discussion with the authors and other reviewers?
Regards,
Your AC | Summary: The paper studies a Water-wave Information Transmission and Recurrent Acceleration Network (WITRAN) framework to model dependencies in a long historical time series. Inspired by TimesNet, WITRAN introduces a water-wave information transmission strategy to model temporal information. This is implemented by the proposed Horizontal Vertical Gated Selective Unit (a kind of recurrent unit, similar to GRU).
Strengths: 1) This paper is well-written
2) Figures illustrate model details in a good way
3) This paper studies the single-channel case, though most existing papers consider evaluations on multivariate time series. This makes sense as this can better evaluate the ability to learn temporal dependencies in time series.
4) Advanced baselines such as MICN, TimesNet, PatchTST, and FiLM are included.
Weaknesses: 1) Although using the water-wave structure is new to me, I am still curious about why this design is needed in long-range time series forecasting. Especially, are there any special temporal structures that existing models in Figure a-g cannot handle?
2) The water-wave structure is a strategy to model time series dependencies. However, it is unclear how to decide the number of R and C and what are their effects on the final prediction.
3) In Figure 10 (appendix), the results of the proposed are much worse than other baselines: FiLM and Pyraformer. But you claimed that “our model WITRAN gives the best performance among different models” in Section J. Moreover, other plots in Figure 11-31 did not demonstrate the effectiveness of WITRAN.
4) The reference part is missing.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Questions are as follows:
- Are there any special temporal structures that existing models in Figure a-g cannot handle?
- How to decide the number of R and C
- How to set the norm in (1).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Some limitations in terms of implementation have been mentioned in the Conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our sincere gratitude to Reviewer r3j3 for their comprehensive review, which included thought-provoking questions and valuable insights.
> Q1: Although using the water-wave structure is new to me, I am still curious about why this design is needed in long-range time series forecasting.
We appreciate your recognition of the novelty of our work. The advantages of applying the water-wave structure to long-range time series forecasting can be summarized as follows:
(1) Bi-granular Design: The water-wave structure employs a bi-granular design, which offers two advantages. Firstly, it shortens the information transmission path, allowing for efficient information flow. Secondly, it enables the exploration of hidden long-term and short-term repetitive patterns in the sequences. These advantages are particularly beneficial for long-range prediction tasks.
(2) Simultaneous Information Transmission: The water-wave structure facilitates simultaneous horizontal and vertical information transmission. This concurrent transmission allows for accelerated processing within the model, thereby improving efficiency.
(3) Global Relationship Modeling: With the water-wave structure, the global relationships in long-range time series can be captured effectively using a single layer, as illustrated in Figure 1. This reduces the complexity of the network and the training difficulty.
By leveraging these advantages, the water-wave structure demonstrates its effectiveness and efficiency in long-range time series forecasting tasks.
> Q2: Are there any special temporal structures that existing models in Figure a-g cannot handle?
Modeling global and local correlations and discovering long- and short-term repetitive patterns are crucial for accurate long-range time series forecasting. Previous works have partially addressed these issues separately, but have not been able to address all of them simultaneously; for example, some face challenges in capturing hidden semantic information when operating directly on point-wise input tokens. To compare the differences between WITRAN and models (a)-(g) in Figure 1 more clearly, we have prepared the following table to highlight the advantages of WITRAN.
| **Advantages** | (a) RNN | (b) CNN | (c) Full Attention | (d) LogTrans | (e) Pyraformer | (f) MICN | (g) PatchTST | (h) TimesNet | (i)WITRAN(ours) |
|:-----------------------------------------------------------:|:-------:|:-------:|:------------------:|:------------:|:--------------:|:--------:|:------------:|:------------:|:---------------:|
| **Non point-wise semantic information capture** | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| **Special design to capture long-term repetitive patterns** | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ |
| **Efficiently (1 or 2 layers) model global correlations** | ✓ (1) | ✗ | ✓ (1) | ✗ | ✗ | ✓ (2) | ✓ (2) | ✗ | ✓ (1) |
| **Well solve the gradient vanishing/exploding problem of RNN** | ✗ | - | - | - | - | - | - | - | ✓ |
> Q3: How to decide the number of R and C and what are their effects on the final prediction.
$C$ reflects the periodicity of short-term repetitive patterns, while $R$ reflects the coarse-grained periodicity of long-term repetitive patterns. The value of $C$ is therefore related to the specific forecasting task; following previous work, we set $C$ to multiples of 12. The value of $R$ is determined by the sequence length $L$ and $C$, specifically $R = L/C$. For more detailed descriptions and experimental analysis, please refer to Section 3.1 and Section I.1.
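As an illustrative sketch (the concrete numbers are our own example and not tied to any particular dataset), the bi-granular rearrangement can be pictured as folding a length-$L$ input into an $R \times C$ grid:

```python
import numpy as np

# Illustrative reading of the bi-granular layout in Section 3.1:
# a length-L input is folded into R rows of C columns, so each row
# spans one short-term period (e.g. C = 24 hours) and each column
# aligns the same time-of-day across R consecutive days.
L, C = 168, 24          # one week of hourly data; daily period (assumed)
R = L // C              # R = L / C = 7 rows
x = np.arange(L, dtype=float)
grid = x.reshape(R, C)  # entry (r, c) holds time step r*C + c
```

Horizontal recurrence then runs along each row (within a period), while vertical recurrence runs down each column (across periods).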
> Q4: How to set the norm in (1).
The selection of the $norm$ in (1) is adaptively chosen by the model on the validation set based on the characteristics of the dataset and the task. For more details on this aspect, please refer to Section I.1.
> Q5: In Figure 10 (appendix), the results of the proposed are much worse than other baselines: FiLM and Pyraformer. But you claimed that “our model WITRAN gives the best performance among different models” in Section J. Moreover, other plots in Figure 11-31 did not demonstrate the effectiveness of WITRAN.
Thank you for pointing that out. The forecasting case figures in the appendix are segments randomly selected from various tasks of different datasets, intended to provide a fair overview of WITRAN's performance when combined with Tables 2 and 3. However, random selection may occasionally include segments that are not fully representative, such as the one you specifically mentioned (Figure 10), which can indeed lead to misunderstandings. Therefore, to further demonstrate the advantages of WITRAN, we have re-randomly selected four additional cases for this task. Please refer to the newly submitted PDF file for specific details, where the advantages of WITRAN become apparent.
Additionally, we have conducted metric calculations for Figure 11 to Figure 31. Due to the character limit, we have also included this section in the PDF file. Please refer to Table A in the PDF file for the specific results.
From the results in Table A, it can be observed that among all the randomly selected original cases, except for 5 cases, WITRAN consistently demonstrated superior performance. Thus, this provides ample evidence to support the overall superiority of WITRAN.
> Q6: The reference part is missing.
In the original version, we placed the references in the Supplementary Material. In the revision, we will place it in the main content.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. The following are the remaining questions:
Since the main contribution is the water-wave WITRAN structure (based on bi-granular Design), I believe **it is related to recurrent neural network variants like dilated RNN [1], sliced RNN [2], and other RNNs with residual/skip connections [3]**. It would be helpful if you could explore and discuss the connections between WITRAN and these related works.
[1] Dilated Recurrent Neural Networks (2017)
[2] Sliced Recurrent Neural Networks (2018)
[3] End-to-end time series imputation via residual short paths (2018)
In Q2, it is mentioned that RNNs lack a specific design to capture long-term repetitive patterns and struggle with the gradient vanishing/exploding problem. What about LSTM/GRU? (I only found a discussion on their time and memory consumption in Appendix C.) LSTM/GRU architectures include cells and gates that aid in capturing both long-term and short-term information. Additionally, is there evidence provided to support that WITRAN can overcome the gradient vanishing/exploding problem of RNNs?
Furthermore, in Q2, I believe the claim "Previous works have partially addressed these issues separately, but have not been able to address all of them simultaneously" is inaccurate. Numerous previous time-series prediction works and RNN variants (such as multiscale RNNs) have been developed to incorporate both long-term and short-term information.
I have another question about the visualization results (Q5). Since the WITRAN model heavily relies on long-term repetitive patterns, I am unclear about its main advantages compared to methods that utilize periodic patterns or neural basis approximation, such as FiLM or NBeats. For instance, in Figures 29 and 43, for the ETTh1 and Weather datasets (where significant repetitive patterns appear to be absent), WITRAN and FiLM generate very similar predictions.
Overall, I thoroughly enjoyed reading this paper, especially due to its bi-granular design and the use of univariate evaluation in the experiments. I believe addressing these questions will further enhance the overall quality of the paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer r3j3 - Part 1
Comment: We greatly appreciate your recognition of our work and your suggestions to further enhance the overall quality of the paper. Due to space limitations, we will divide our response into three parts.
**[Part 1]**
> Q7: Since the main contribution is the water-wave WITRAN structure (based on bi-granular Design), I believe it is related to recurrent neural network variants like dilated RNN [1], sliced RNN [2], and other RNNs with residual/skip connections [3]. It would be helpful if you could explore and discuss the connections between WITRAN and these related works.
[1] Dilated Recurrent Neural Networks (2017)
[2] Sliced Recurrent Neural Networks (2018)
[3] End-to-end time series imputation via residual short paths (2018)
We sincerely appreciate your suggestions. We have compared these methods with WITRAN, and here is the comparison:
| **Advantages** | DilatedRNN [1] | SlicedRNN [2] | RIMP-LSTM [3] | WITRAN (ours) |
|:------------------------------------------------------------------------------------:|:--------------:|:--------:|:-------------:|:-------------:|
| **Efficiently (1 layer) model global correlations** | ⍻ | ⍻ | ✓ (1) | ✓ (1) |
| **Special design to capture long-term repetitive patterns** | ✓ | ✗ | ✗ | ✓ |
| **Using 1 layer to capture long- and short-term repetitive patterns simultaneously** | ✗ | ✗ | ✗ | ✓ |
| **Well solve the gradient vanishing/exploding problem of RNN** | ✓ | ✓ | ✓ | ✓ |
(1) Efficiently (1 layer) model global correlations: (a) When the dilations of DilatedRNN do not include the value 1, multiple layers need to be constructed to extract global correlations. (b) SlicedRNN improves efficiency to some extent by processing minimum subsequences in parallel, but it still requires multiple layers to capture the global correlations of the sequence.
(2) Special design to capture long-term repetitive patterns: (a) SlicedRNN is unable to capture long-term repetitive patterns among elements of sub-sequences. (b) Although RIMP-LSTM incorporates Residual Paths and Residual Sum Unit designs, it still cannot effectively extract long-term repetitive patterns.
(3) Using 1 layer to capture long- and short-term repetitive patterns simultaneously: (a) DilatedRNN can capture long- and short-term repetitive patterns, but it requires the use of multiple layers to achieve this. (b) SlicedRNN and RIMP-LSTM are not particularly adept at handling long-term repetitive patterns, as mentioned in (2).
(4) Well solve the gradient vanishing/exploding problem of RNN: (a) In reference [1], the article states and provides formal proof that reducing the length of information paths between time steps can prevent the issues of gradient vanishing/exploding. DilatedRNN, SlicedRNN, and WITRAN tackle this problem by reducing the length of information transmission paths. (b) RIMP-LSTM addresses this issue by the designs of Residual Paths and Residual Sum Units.
---
Reply to Comment 1.1.2:
Title: Response to Reviewer r3j3 - Part 2
Comment: **[Part 2]**
> Q8: In Q2, it is mentioned that RNNs lack a specific design to capture long-term repetitive patterns and struggle with the gradient vanishing/exploding problem. What about LSTM/GRU? (I only found a discussion on their time and memory consumption in Appendix C.) LSTM/GRU architectures include cells and gates that aid in capturing both long-term and short-term information. Additionally, is there evidence provided to support that WITRAN can overcome the gradient vanishing/exploding problem of RNNs?
LSTM and GRU, through their gated designs, do help mitigate the gradient vanishing/exploding problem to some extent. However, previous work has pointed out that approaches (e.g., LSTM) relying on a special type of linear unit with a self-connection do not explicitly address the issue of exploding gradients.
Furthermore, LSTM and GRU are indeed capable of handling sequences of regular-range and capturing both long-term and short-term repetitive patterns within them. However, when applied to long-range forecasting tasks, they can suffer from the issue of information forgetting, and more details can be found in Appendix B.
Therefore, these two points contribute to the difficulty of applying LSTM/GRU to long-range forecasting tasks.
Regarding WITRAN's ability to overcome the gradient vanishing/exploding problem, reference [1] (DilatedRNN) has indicated and formally proven that reducing the length of information transmission paths can effectively prevent gradient vanishing/exploding, and WITRAN addresses the issue precisely by reducing these path lengths. To further validate this experimentally, we conducted experiments on the ETTh1 dataset with 8 tasks to examine how well LSTM/GRU avoid gradient vanishing/exploding. The results are shown below:
| **Tasks** | **168-168** | **168-336** | **336-336** | **336-720** | **720-720** | **720-1440** | **1440-1440** | **1440-2880** |
|:----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|:-------------:|:-------------:|
| **LSTM** | **6/18** | **6/18** | **3/18** | **3/18** | 0/18 | **1/18** | 0/18 | **3/18** |
| **GRU** | **7/18** | **6/18** | **3/18** | **3/18** | 0/18 | 0/18 | 0/18 | **6/18** |
| **WITRAN** | 0/18 | 0/18 | 0/18 | 0/18 | 0/18 | 0/18 | 0/18 | 0/18 |
The A/B ratio in the table represents the number of hyperparameter settings whose loss became NaN during training (A) over the total number of settings in the search space (B). The hyperparameters being searched are $d_\mathrm{model}$ and $e_\mathrm{layer}$. Tasks in which the loss became NaN during training are highlighted in bold.
From the above table, it can be observed that LSTM and GRU do help to some extent in mitigating the issues of gradient vanishing/exploding. However, they still face challenges in this regard. On the other hand, WITRAN addresses this problem effectively.
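The divergence bookkeeping above reduces to checking, per hyperparameter setting, whether the training loss ever became NaN (or infinite). A minimal, self-contained sketch of that check in plain Python; the helper names and toy loss values are illustrative assumptions, not from the paper:

```python
import math

def loss_diverged(losses):
    """Return True if any recorded loss is NaN or infinite."""
    return any(math.isnan(l) or math.isinf(l) for l in losses)

def count_diverged(runs):
    """runs maps a hyperparameter setting -> list of training losses.
    Returns (A, B): diverged settings over total settings searched."""
    diverged = sum(1 for losses in runs.values() if loss_diverged(losses))
    return diverged, len(runs)

# toy search space: 3 settings, one of which blows up to NaN
runs = {
    ("d_model=32", "e_layer=1"): [1.2, 0.9, 0.7],
    ("d_model=64", "e_layer=2"): [1.5, float("nan")],
    ("d_model=128", "e_layer=3"): [2.0, 1.1, 0.8],
}
assert count_diverged(runs) == (1, 3)  # the A/B ratio reported above
```

With this convention, an entry such as "6/18" in the table simply means 6 of the 18 searched settings diverged.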
---
Reply to Comment 1.1.3:
Title: Response to Reviewer r3j3 - Part 3
Comment: **[Part 3]**
> Q9: Furthermore, in Q2, I believe the claim "Previous works have partially addressed these issues separately, but have not been able to address all of them simultaneously" is inaccurate. Numerous previous time-series prediction works and RNN variants (such as multiscale RNNs) have been developed to incorporate both long-term and short-term information.
The phrase "these issues" refers to two aspects: (1) modeling global and local correlations, and (2) discovering long- and short-term repetitive patterns. Indeed, some RNN variants have been proposed, as mentioned in your previous question (Q6). However, when applied to long-range time series forecasting tasks, they still struggle to address both modeling global and local correlations and discovering long- and short-term repetitive patterns simultaneously:
(1) DilatedRNN: DilatedRNN is capable of capturing both long-term and short-term repetitive patterns. However, due to its multi-layer design, DilatedRNN still faces challenges in effectively modeling global correlations or capturing long- and short-term repetitive patterns simultaneously.
(2) SlicedRNN: SlicedRNN has the ability to parallelly process minimum subsequences. However, it still requires the design of multiple layers to capture global correlations. Moreover, it is unable to extract the long-term repeated pattern among elements of sub-sequences.
(3) RIMP-LSTM: RIMP-LSTM can effectively capture the global correlations through its 1-layer design. However, it still struggles to effectively extract long-term repetitive patterns.
> Q10: I have another question about the visualization results (Q5). Since the WITRAN model heavily relies on long-term repetitive patterns, I am unclear about its main advantages compared to methods that utilize periodic patterns or neural basis approximation, such as FiLM or NBeats. For instance, in Figures 29 and 43, for the ETTh1 and Weather datasets (where significant repetitive patterns appear to be absent), WITRAN and FiLM generate very similar predictions.
We sincerely thank you for your thorough review once again. Capturing long-term repetitive patterns is crucial for long-range forecasting tasks. However, at the same time, capturing short-term repetitive patterns is also crucial for the accuracy of the forecasting.
WITRAN relies on both long-term and short-term repetitive patterns in historical sequences to capture periodic semantic information. In particular, as you mentioned for Figures 29 and 43, both WITRAN and FiLM are capable of forecasting a periodic fluctuation of peaks and valleys. However, FiLM's forecasts tend to be smoother, while WITRAN excels at predicting the upward/downward fluctuations (high-frequency data) present in short-term periods. This further attests to the advantages of WITRAN. | Summary: This paper studies the problem of long-range time series forecasting and proposes the WITRAN model. The paper analyzes and compares previous forecasting methods from the perspective of the information transmission process and designs a water-wave information transmission mechanism, which simultaneously captures global and local correlations via bi-granular information transmission. A recurrent acceleration network is also designed to reduce the computational complexity.
Strengths: 1) WITRAN is an ingeniously crafted framework, capable of simultaneously capturing two aspects of semantic information within long-range time series - global-local correlations and both long- and short-term periodic patterns. The WIT includes two innovative modules, namely, HVGSU and GSU, which augment forecasting accuracy while enhancing explainability. Additionally, the RAN segment is designed to markedly improve efficiency. Finally, the WIT and RAN segments are seamlessly integrated together.
2) The paper presents an exhaustive theoretical substantiation of the RAN segment's operating efficiency, shedding light on its capacity to significantly boost model efficiency.
3) WITRAN exhibits superior performance, as confirmed by a series of comprehensive experiments with equitable settings across all baselines. WITRAN outperforms SOTA methods in both long-range and ultra-long-range forecasting tasks. Moreover, the incorporation of RAN indeed enhances computational efficiency and reduces memory footprint, in alignment with theoretical proof.
Weaknesses: 1) It seems that the proposed model is suitable for the time series that naturally contains bi-granular periodicity, such as traffic flow with daily and weekly periodicity. So the generalization of the proposed model is unclear.
2) In the experimental section, many Transformer-based methods are taken as baselines, but few RNN-based methods are compared. Since the proposed model is RNN-based, it is necessary to add more RNN-based baselines.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Do the five datasets have enough representativeness among all kinds of time series?
2) The material included in Appendix A appears to be rather elementary for the target audience in this domain. I would be intrigued to understand the rationale behind the authors' decision to incorporate this content.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our sincere appreciation to Reviewer 1moF for their valuable insights and for recognizing the significance of our research.
> Q1: It seems that the proposed model is suitable for the time series that naturally contains bi-granular periodicity, such as traffic flow with daily and weekly periodicity. So the generalization of the proposed model is unclear.
WITRAN effectively captures both long- and short-term repetitive patterns through bi-granular information transmission. As long as the dataset contains periodicity and coarser-grained periodicity, WITRAN can effectively capture them. Our extensive experiments have demonstrated that WITRAN achieves outstanding performance in various fields, such as energy, traffic, and weather. Additionally, the RAN framework in WITRAN can serve as a generic framework for acceleration, highlighting its generality.
> Q2: In the experimental section, many Transformer-based methods are taken as baselines, but few RNN-based methods are compared. Since the proposed model is RNN-based, it is necessary to add more RNN-based baselines.
Thank you very much for your valuable suggestions. We have included several classic RNN-based methods, including ConvLSTM, PredRNN, and PredRNN++. Due to space limitations, we have included the specific experimental results of this section in the Global Rebuttal. The experimental results in this section, together with the results in Tables 2 and 3, provide further evidence that our method is optimal.
> Q3: Do the five datasets have enough representativeness among all kinds of time series?
The benchmark datasets we used cover various domains and have different levels of granularity in their original collection. Previous works, including the baseline methods we selected, mostly use these datasets. Therefore, these datasets should be sufficiently representative of time series.
> Q4: The material included in Appendix A appears to be rather elementary for the target audience in this domain. I would be intrigued to understand the rationale behind the authors' decision to incorporate this content.
This section's detailed description of LSTM and GRU does indeed seem somewhat elementary. Our main intention is twofold: (1) to provide a foundation for Appendix B, where we elaborate on the inspiration drawn from LSTM and GRU, and (2) to facilitate a clear comparison between our proposed HVGSU and LSTM/GRU for the convenience of readers.
---
Rebuttal Comment 1.1:
Title: discussion
Comment: Dear Reviewer 1moF,
Thank you for being a reviewer for NeurIPS2023, your service is invaluable to the community!
The authors have submitted their feedback.
Could you check the rebuttal and other reviewers' comments and start a discussion with the authors and other reviewers?
Regards,
Your AC
---
Rebuttal Comment 1.2:
Comment: The authors have addressed all my previous concerns.
I will keep my score as 8.
---
Reply to Comment 1.2.1:
Title: Thanks
Comment: Thanks for your time and positive comments! | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading, and detailed and considerate feedback.
# 1 The Supplementary Baseline Experimental Results
Due to space limitations, we report the results of the supplementary baseline experiments in this section. We have included several classic RNN-based methods: ConvLSTM, PredRNN, and PredRNN++. The performance of these methods is as follows:
| Dataset | Task | ConvLSTM MSE | ConvLSTM MAE | PredRNN MSE | PredRNN MAE | PredRNN++ MSE | PredRNN++ MAE |
|:-------:|:---------:|:---------:|:---------:|:--------:|:---------:|:----------:|:---------:|
| ECL | 168-168 | 1.10061 | 0.86419 | 1.59968 | 0.97815 | 1.05324 | 0.81715 |
| | 168-336 | 0.95128 | 0.79262 | 1.3981 | 0.92003 | 1.14599 | 0.84769 |
| | 336-336 | 1.06283 | 0.84397 | 1.24618 | 0.86953 | 1.14628 | 0.84583 |
| | 336-720 | 1.02897 | 0.83296 | 1.38004 | 0.88513 | 1.12283 | 0.84775 |
| | 720-720 | 1.16781 | 0.89208 | 1.14053 | 0.81585 | 1.0375 | 0.81916 |
| | 720-1440 | 1.19013 | 0.90671 | 1.38331 | 0.91472 | - | - |
| | 1440-1440 | 1.08107 | 0.86057 | 1.22349 | 0.88084 | - | - |
| | 1440-2880 | 1.12110 | 0.86696 | 1.23118 | 0.84768 | - | - |
| Dataset | Task | ConvLSTM MSE | ConvLSTM MAE | PredRNN MSE | PredRNN MAE | PredRNN++ MSE | PredRNN++ MAE |
|:-------:|:---------:|:---------:|:---------:|:--------:|:---------:|:----------:|:---------:|
| traffic | 168-168 | 1.9568 | 1.20513 | 2.18789 | 1.26295 | 2.03032 | 1.22555 |
| | 168-336 | 1.56089 | 1.00843 | 1.94531 | 1.19838 | 2.15307 | 1.25684 |
| | 336-336 | 1.53403 | 1.0047 | 1.94237 | 1.20825 | 2.01507 | 1.22229 |
| | 336-720 | 1.97317 | 1.1942 | 2.10299 | 1.23738 | 2.07428 | 1.23589 |
| | 720-720 | 1.97934 | 1.19838 | 2.24516 | 1.27154 | 2.05511 | 1.23179 |
| | 720-1440 | 1.94060 | 1.17002 | 2.12212 | 1.23488 | - | - |
| | 1440-1440 | 2.05290 | 1.2179 | 3.43876 | 1.49211 | - | - |
| | 1440-2880 | 2.11828 | 1.24669 | 1.94494 | 1.20526 | - | - |
| Dataset | Task | ConvLSTM MSE | ConvLSTM MAE | PredRNN MSE | PredRNN MAE | PredRNN++ MSE | PredRNN++ MAE |
|:-------:|:---------:|:---------:|:---------:|:--------:|:---------:|:----------:|:---------:|
| ETTh1 | 168-168 | 1.11838 | 0.95673 | 0.20783 | 0.36933 | 0.19706 | 0.35199 |
| | 168-336 | 0.42289 | 0.53454 | 0.60663 | 0.6639 | 0.18579 | 0.34186 |
| | 336-336 | 0.28599 | 0.43453 | 0.90852 | 0.83825 | 0.23011 | 0.38086 |
| | 336-720 | 0.18302 | 0.35023 | 0.29477 | 0.44067 | 0.28792 | 0.41855 |
| | 720-720 | 0.21331 | 0.36852 | 0.92726 | 0.87761 | 0.34586 | 0.56421 |
| | 720-1440 | 0.26016 | 0.40767 | 0.2097 | 0.35739 | - | - |
| | 1440-1440 | 0.46201 | 0.55664 | 0.25375 | 0.39662 | - | - |
| | 1440-2880 | 0.68657 | 0.73995 | - | - | - | - |
| Dataset | Task | ConvLSTM MSE | ConvLSTM MAE | PredRNN MSE | PredRNN MAE | PredRNN++ MSE | PredRNN++ MAE |
|:-------:|:---------:|:---------:|:---------:|:--------:|:---------:|:----------:|:---------:|
| ETTh2 | 168-168 | 0.67911 | 0.67424 | 2.71814 | 1.24722 | 0.46365 | 0.53826 |
| | 168-336 | 0.57245 | 0.6036 | 7.0023 | 1.94492 | 0.50444 | 0.56156 |
| | 336-336 | 0.43214 | 0.52404 | 1.21141 | 0.90117 | 0.53269 | 0.57922 |
| | 336-720 | 0.44047 | 0.52043 | 0.77007 | 0.71523 | 0.59645 | 0.61303 |
| | 720-720 | 0.61122 | 0.61653 | 0.68664 | 0.66567 | 0.65089 | 0.65267 |
| | 720-1440 | 0.68432 | 0.67343 | 0.5682 | 0.63012 | 0.52903 | 0.59123 |
| | 1440-1440 | 0.69629 | 0.67345 | 0.84719 | 0.77302 | - | - |
| | 1440-2880 | 0.49551 | 0.57418 | 0.81936 | 0.7682 | - | - |
| Dataset | Task | ConvLSTM MSE | ConvLSTM MAE | PredRNN MSE | PredRNN MAE | PredRNN++ MSE | PredRNN++ MAE |
|:-------:|:---------:|:---------:|:---------:|:--------:|:---------:|:----------:|:---------:|
| WTH | 168-168 | 0.23746 | 0.37102 | 0.70020 | 0.66288 | 0.32711 | 0.43675 |
| | 168-336 | 0.27096 | 0.40337 | 0.96749 | 0.77150 | 0.40391 | 0.48547 |
| | 336-336 | 0.30900 | 0.42290 | 0.77884 | 0.70297 | 0.40797 | 0.49392 |
| | 336-720 | 0.39303 | 0.49367 | 0.77279 | 0.70146 | 0.50707 | 0.56308 |
| | 720-720 | 0.40034 | 0.47207 | 0.72229 | 0.68889 | 0.61416 | 0.65392 |
| | 720-1440 | 0.46196 | 0.55477 | 0.61551 | 0.6625 | - | - |
| | 1440-1440 | 0.39681 | 0.48828 | 0.7735 | 0.74801 | - | - |
| | 1440-2880 | 0.45973 | 0.54664 | - | - | - | - |
Among them, "-" represents the cases of gradient explosion/vanishing. Specifically, during the training process, the loss value may become NaN.
# 2 The Supplementary Forecasting Cases
We randomly selected an additional four forecasting cases to compare the performance of WITRAN, FiLM, and Pyraformer on the 168-168 task of the ECL dataset. For more details, please refer to Figure A in the newly submitted PDF.
# 3 The MAE Metric of Figure 11 to Figure 31
We have calculated the metrics for different methods in each case, more details can be found in Table A in the PDF.
# 4 The Supplementary Robustness Experiments
We have included the table for this section in a separate PDF file. Please refer to Table B in the PDF for specific results.
Pdf: /pdf/a70fddba05cbfd697bff9ab477767023d8e60e96.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Molecule Joint Auto-Encoding: Trajectory Pretraining with 2D and 3D Diffusion | Accept (poster) | Summary: The paper introduces a pretraining method for molecule joint auto-encoding (MoleculeJAE) for 2D molecular topology and 3D molecular geometry. Their approach adopts SE(3) symmetry and is trained by fitting the joint distribution of the trajectories from the forward process of the diffusion model.
The authors treat the 2D molecular structure and 3D geometry as continuous objects, which are perturbed by Gaussian noise in the diffusion process. Unlike traditional diffusion models that denoise data by marginal distribution modeling, their 'auto-encoding' approach focuses on learning the joint distribution of data pair $(x_0, x_t)$. The overall task is then optimized by two objective functions, including a reconstruction loss and a contrastive loss, as well as a surrogate model.
Experimentally, MoleculeJAE is pre-trained on the PCQM4Mv2 dataset and is used to perform downstream tasks on QM9 and MD17 datasets. The experimental results are competitive on property prediction and dynamics prediction tasks.
Overall, this work is well-motivated and novel, with a theoretical justification for their approach. This paper presents good quality work.
Strengths: First of all, this work is well-motivated, as jointly learning 2D and 3D molecular representations by diffusion models is still at the stage of development. MoleculeJAE can learn both 2D bond topology and 3D conformation geometry information and is designed to respect the SE(3) symmetry of molecule data and is trained by fitting the joint distribution of the data’s augmented trajectories extracted from the forward process of the diffusion model.
It is also novel that the joint 2D and 3D learning of diffusion models is trained in a self-supervised learning manner and optimized by a contrastive learning objective function. MoleculeJAE unifies both contrastive and generative learning approaches from a trajectory perspective, providing a versatile and powerful molecular representation that can be applied to various downstream applications.
Unlike traditional diffusion modeling, MoleculeJAE uses a contrastive learning paradigm to fit the trajectory of pairwise molecules, while also utilizing the reconstructive task to perform denoising for each individual molecule. By using contrastive learning, it allows the model to align the augmented views of the same data and simultaneously contrast the augmented views of different data. Also, ablation studies show the importance to have a contrastive loss in the task.
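As a concrete illustration of the contrastive component described above (aligning two augmented views of the same molecule while contrasting views of different molecules), here is a minimal InfoNCE-style sketch in plain Python; the toy vectors, temperature, and function names are illustrative assumptions, not MoleculeJAE's actual implementation:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # cosine similarity between two feature vectors
    return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: anchor i should match positive i (another
    augmented view of the same sample) and repel the other positives."""
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += log_denom - logits[i]
    return loss / len(anchors)

# two samples, two augmented views each: views of the same sample
# point in similar directions, so the aligned pairing scores lower
anchors = [[1.0, 0.0], [0.0, 1.0]]
aligned = [[0.9, 0.1], [0.1, 0.9]]
swapped = [aligned[1], aligned[0]]
assert info_nce(anchors, aligned) < info_nce(anchors, swapped)
```

The loss is minimized exactly when matched views are more similar than mismatched ones, which is the alignment/contrast behavior the review refers to.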
Weaknesses: For methodology, the authors treat the 2D graph topology as a continuous object that is perturbed by Gaussian noise. I understand it is easier and reasonable to put the three things, atom features, bonds, and coordinates, under a single SDE framework. However, following previous works [1][2], describing the 2D graph topology as a discrete object with a discrete diffusion process is more reasonable, and their models show superior results.
Experimentally, MoleculeJAE shows better results with only small margins. However, the authors do not compare the number of model parameters, the overall training time, and the error bars (standard deviation) with existing methods, so it is unclear to me how well MoleculeJAE surpasses those methods.
[1] Vignac, Clement, et al. "DiGress: Discrete Denoising diffusion for graph generation." arXiv preprint arXiv:2209.14734 (2022).
[2] Hua, Chenqing, et al. "MUDiff: Unified Diffusion for Complete Molecule Generation." arXiv preprint arXiv:2304.14621 (2023).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Jointly learning 2D and 3D molecular data is not completely underdeveloped. Previous works on diffusion models, [1][2], have shown that diffusion models can be used to learn 2D and 3D latent molecular representations effectively jointly. They perform better on molecular generation tasks than single-modal diffusion models, and it is proper to cite these works and mention the difference.
2. I wonder if the authors consider extending this auto-encoding framework to discrete graph topology, as a discrete representation and diffusion process is more reasonable for graph-like objects. If not, I would like to hear the reason.
3. I would like to see the error bars (standard deviation), the number of model parameters, and the total training time compared with different methods.
[1] Hua, Chenqing, et al. "MUDiff: Unified Diffusion for Complete Molecule Generation." arXiv preprint arXiv:2304.14621 (2023).
[2] Vignac, Clement, et al. "Midi: Mixed graph and 3d denoising diffusion for molecule generation." arXiv preprint arXiv:2302.09048 (2023).
Typo:
1. Line 257, the log-likelihood logp is also achieves $\rightarrow$ the log-likelihood logp also achieves
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable feedback. Certain aspects of your concerns related to motivation and experiments have also been covered in the general response. Please also review the comprehensive response provided there for further clarifications.
**Weaknesses**
A: Thank you for your insightful comments regarding our methodology and the referenced works. As outlined in Section 2.1 and the appendix, our approach is indeed compatible with discrete diffusions [1][2]. Moreover, we have extended our 3D structure modeling to incorporate the discrete diffusion framework; for detailed information, kindly refer to Appendix A.3. We also want to emphasize that our framework is mainly for unsupervised representation learning; as a result, certain related works focused exclusively on generation tasks were inadvertently excluded. We will add the two references as you suggested.
In response to your concerns about experiments, we have introduced a **new generation downstream task grounded in discrete diffusions**, which serves as a supplement to our property prediction downstream task. For further details, please consult the last part of the general response.
**Question 1**
A: We will discuss the relation between the mentioned papers and our method in the revised version. Notably, our approach introduces a novel unsupervised molecule representation learning framework centered around trajectory modeling. This contrasts with the focus of [1] and [2], which primarily concentrate on generation tasks.
**Question 2**
A: Your inquiry about extending our auto-encoding framework to discrete graph topology is astute. We have indeed considered this extension, and Appendix A.3 provides comprehensive details. From a methodology standpoint, our approach inherently encompasses discrete trajectories, akin to the formulation in [1] and [2]. Furthermore, an additional experiment addressing discrete generation has been performed in the **general response**.
**Question 3**
A: Thank you for raising these questions, and they are important details. We have illustrated the details below.
1. On the standard deviations for geometric property prediction tasks, we want to kindly point out that computing such standard deviations requires huge computational resources. Concretely, SchNet is by far the fastest geometric model (in comparison to PaiNN and Equiformer), and it may take around 12 hours for one task. We would need 13 (pretraining algorithms) * 12 (tasks) * 12 (GPU hours) * 5 (seeds) = 9.36K GPU hours, or about 390 GPU days. Such a computational cost is huge and impossible for us to cover during the rebuttal period. This is also why the existing geometric pretraining methods (the baselines in this work) do not include standard deviations.
2. The number of model parameters depends on the backbone. Roughly speaking, it is the sum of the parameters of the backbone model, the projection head, and the MLP for the time embedding. For the experiment in the paper, the number of model parameters is 1.353M.
3. The training time for different pretraining algorithms are listed below:
| Pretraining Algorithm | min / epoch |
| -- | -- |
| Type Prediction | 8 |
| Distance Prediction | 7 |
| Angle Prediction | 8 |
| GeoSSL-RR | 10 |
| GeoSSL-InfoNCE | 10 |
| GeoSSL-EBM-NCE | 11 |
| GeoSSL-DDM-1L | 12 |
| GeoSSL-DDM | 19 |
| 3D InfoMax | 9 |
| GraphMVP | 11 |
| Molecule JAE-light | 17 |
| Molecule JAE | 20 |
Here, we take all possible ingredients of Molecule JAE (see Fig. 2) into account, and the diffusion steps are taken to be maximal. **Reducing the noise schedule of the diffusion model in Molecule JAE will further decrease the pretraining time** (Molecule JAE-light is for non-generative downstream tasks).
**Typos**: Thank you for highlighting the typos. We will rectify the mentioned error in the revised version.
[1] Hua, Chenqing, et al. "MUDiff: Unified Diffusion for Complete Molecule Generation."
[2] Vignac, Clement, et al. "Midi: Mixed graph and 3d denoising diffusion for molecule generation."
---
Rebuttal Comment 1.1:
Title: Response by Reviewer
Comment: I thank the authors for their response. I will keep my score for a borderline accept. I hope the authors can properly cite and mention [1][2] in their revised version.
[1] Hua, Chenqing, et al. "MUDiff: Unified Diffusion for Complete Molecule Generation."
[2] Vignac, Clement, et al. "Midi: Mixed graph and 3d denoising diffusion for molecule generation."
---
Reply to Comment 1.1.1:
Title: Response from authors
Comment: Thank you for your feedback and for taking the time to consider our revisions. As we still have time within the discussion period, if there are particular aspects of our paper that you feel we should concentrate on or enhance further, we are willing to incorporate your further suggestions. | Summary: The paper proposes MoleculeJAE, an auto-encoder for both 2D and 3D molecule diffusion trajectories. The model learns the trajectories jointly in a self-supervised manner. Empirically, MoleculeJAE achieves competitive results on property and force prediction benchmarks.
Strengths: The joint diffusion of 2D and 3D is novel. It is widely concerned that point cloud diffusion is less aware of 2D information. I admire the authors' contribution to this problem. I would also thank the authors for providing a detailed theoretical analysis.
Weaknesses: 1. I did not really get the motivation of modeling diffusion trajectories. Personally, I think the intermediate states are less informative than x_0. Could you please further explain your idea, and provide some additional ablation results if possible, for it seems that the current ablation study shows that the role of the contrastive loss is not very significant in many cases? Thanks.
2. I believe the baselines of QM9 experiments are not state-of-the-art. Please consider this work: https://openreview.net/forum?id=tYIMtogyee
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: There is another work that introduced timestep embeddings in their diffusion process: https://ojs.aaai.org/index.php/AAAI/article/view/25639/25411 What are the advantages of contrastive learning?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and valuable feedback. Certain aspects of your concerns related to motivation and experiments have also been covered in the general response. Please also review the comprehensive response provided there for further clarifications.
**Weaknesses: Motivation of Modeling Diffusion Trajectories and contrastive learning**
1. Motivation: In Sections 2.1, 3.2, and 3.3, we furnish an in-depth understanding of our motivation. To expound briefly, Section 2.1 introduces a novel trajectory-based approach, inspired by real molecule dynamics and diffusion generative models, to serve as continuous data augmentation. We subsequently adhere to the established protocol of unsupervised representation learning, aiming to deduce a learning objective from the augmented trajectory data distribution. Our focus centers on modeling the collective (joint) distribution of these equivariant trajectories, ultimately culminating in a fusion of a distinct contrastive element and generative learning objectives, as outlined in Section 3.3. The theoretical underpinning is furnished in Section 3.2.
Regarding your concern about the **informativeness of intermediate states**, we have addressed this aspect in Lines 95-96, highlighting that for small t values, $x_t$ remains proximal to $x_0$. This also informs our decision to abstain from inducing the white noise typical of diffusion generative models (in experiments, we keep the noise schedule small). When the trajectory aligns closely with the original data, we consider it an augmentation of the initial dataset (just as Gaussian blurring and color distortion are treated as augmentations of CV data). On the other hand, since the trajectory is continuous, the amount of transformation between two successive steps is small, and we care more about the correlation between successive steps.
2. We acknowledge your observation regarding the incremental significance of the **contrastive loss** in our ablation study. We believe adding optimization tricks customized for contrastive learning (such as momentum updates and EMA) may improve the ablation results; the relevant experiments are currently in progress. However, we want to emphasize that we have theoretically demonstrated that the **joint distribution** of the trajectories contains more information than the **marginal distributions**, and we utilize contrastive learning to extract this additional information for achieving a better molecular representation.
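The closeness of $x_t$ to $x_0$ at small $t$ can be made concrete with the closed-form forward marginal of a standard DDPM-style diffusion, $q(x_t \mid x_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\,x_0,\,(1-\bar\alpha_t)I)$. The sketch below uses an illustrative linear beta schedule; the schedule values, horizon $T$, and function names are assumptions for illustration, not the paper's actual settings:

```python
import math
import random

def alpha_bar(t, T=1000, beta_min=1e-4, beta_max=2e-2):
    """Cumulative product of (1 - beta_s) under a linear beta schedule."""
    prod = 1.0
    for s in range(1, t + 1):
        beta = beta_min + (beta_max - beta_min) * (s - 1) / (T - 1)
        prod *= 1.0 - beta
    return prod

def perturb(x0, t, rng):
    """Sample x_t ~ N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    abar = alpha_bar(t)
    return [math.sqrt(abar) * x + math.sqrt(1.0 - abar) * rng.gauss(0.0, 1.0)
            for x in x0]

# small t: nearly all signal is preserved, so x_t is a mild augmentation
assert alpha_bar(5) > 0.99
# large t: almost all signal is destroyed (close to white noise)
assert alpha_bar(900) < 0.01
```

Keeping the noise schedule small (i.e., staying in the high-`alpha_bar` regime) is what lets a noised trajectory be treated as data augmentation rather than destruction of the sample.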
**Weaknesses: State-of-the-Art Baselines**
Res: We appreciate your recommendation to include state-of-the-art baselines in our QM9 experiments. In Line 290, we emphasize the versatility of our pretraining framework with respect to geometric GNN backbones. Our approach **encompasses a fair testing protocol where the backbone remains constant (Schnet), while diverse pretraining tasks are explored**, as evident in our experimental section. In response to your suggestion, we have already integrated the pretraining framework in https://openreview.net/forum?id=tYIMtogyee into our experimentation (marked as **GeoSSL-DDM-1L**). Please see the general response (point 2 of the experiments part) for more details.
**Question on a related work**
Res: We will add the related work (https://ojs.aaai.org/index.php/AAAI/article/view/25639/25411) involving timestep embeddings in the diffusion process. However, we wish to highlight a fundamental distinction between our representation learning context and the generative model focus of the referenced work. Our trajectory distribution is harnessed as an augmented data distribution, resulting in the derivation of a **combined contrastive and generative approach** from the joint trajectory distribution, as expounded in our learning objectives. While contrasting with prior works, we also emphasize the intrinsic value of contrastive learning as a way of extracting the additional information contained in the joint distribution rather than the marginal distributions (**which cannot be done by just introducing timestep embeddings**), substantiating its necessity theoretically and through ablation studies.
Sidenote: please see the general response for an additional downstream task of jointly 2D and 3D generation.
---
Rebuttal Comment 1.1:
Title: Thank the authors for the rebuttal
Comment: I think my concerns are well-clarified by the authors. I am updating my rating from 4 to 6.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thanks for your quick response. I sincerely appreciate your time and effort in re-evaluating our work. | Summary: This paper proposes a new representation learning method for molecules using 2D and 3D structures. The joint distribution between original molecules and augmented molecules is decomposed into reconstructive and contrastive tasks. The proposed model, MolecularJAE, simultaneously tackles both tasks with the help of SE(3) equivariant GNN models. MolecularJAE is evaluated on quantum property prediction and molecular dynamics prediction and achieves a competitive result against baseline models.
Strengths: - Diffusion over two different modalities hasn't been explored as far as I know.
- The decomposition of joint distribution into reconstructive and contrastive tasks is new and interesting.
Weaknesses: - It is known that the 3D structure of a molecule follows certain physical rules. The diffusion process used in this paper does not account for the prior knowledge from the domain.
- The performance of the proposed model is worse than some recently proposed models, which hasn't been included in the experiment section. For example, the performance of Equiformer[1] and PaiNN[2] on QM9 is better than the proposed model across almost all tasks.
- The presentation of this paper can be improved further. For example, Figure 1 does not provide any meaningful information to readers. Figure 2 is also difficult to digest. Polishing these figures with additional details and proper explanations would improve the accessibility of the manuscript.
- There are too many typos throughout the main text and appendix.
[1] Liao, Yi-Lun, and Tess Smidt. "Equiformer: Equivariant graph attention transformer for 3d atomistic graphs." ICLR 2023.
[2] Schütt, Kristof, Oliver Unke, and Michael Gastegger. "Equivariant message passing for the prediction of tensorial properties and molecular spectra." ICML 2021.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Why does the energy function formulation (Eq 7) reformulate the maximum likelihood estimator to the one above line 196, even though the normalizing constant depends on theta?
- The reconstructive task with score matching is advertised as a tool for generating new samples (line 222). Are there any experiments on the generative perspective? Given that the gain from the contrastive task is not significant in the experiments, the score-matching model seems well modeling the underlying distribution of graphs.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I couldn't find any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our sincere gratitude for your meticulous examination and insightful feedback. Now we want to address your concerns and clarify misunderstandings in detail.
**Weaknesses**
1. **Incorporating Physical Rules**: We appreciate your suggestion to integrate such knowledge into our diffusion process. However, we must acknowledge the practical challenge in precisely determining the drift and diffusion coefficients for Eq. 6. We've explored the avenue of parameterizing these coefficients, following a flexible framework for parameterizing general physical diffusions [1], yet found the resultant improvement to be marginal. In response, we intend to incorporate the pertinent experimental outcomes into the appendix as a related empirical study.
2. **Performance Analysis**: In light of your observation, we emphasize our flexibility in employing various geometric GNN backbone models, as articulated in Line 290. Our pretraining framework operates agnostically to the choice of backbone models, including PaiNN and Equiformer. We contend that **evaluating a pretraining method fairly necessitates maintaining a fixed backbone model while testing diverse pretraining frameworks**, as demonstrated in our experimental section. While we are indeed conducting experiments with a PaiNN backbone model [2] as per your suggestion (see the general response), we are confident that our current experiments substantiate the efficacy of our approach.
3. **Enhancing Clarity of Figures and Correcting Typos**: We recognize your concerns regarding Figures 1 and 2. We perceive Figure 1 as illustrative of our novel trajectory modeling pipeline, encompassing both contrastive and denoising elements. Similarly, Figure 2 elucidates our novel joint 2D and 3D information encoding while preserving symmetry.
We appreciate your feedback on typographical errors and will address them in line with other reviewers' suggestions. To expedite this process, we kindly request the specific locations of these errors.
**Questions**
1. We appreciate your astute observation. To address this, we will incorporate the expectation of the normalization constant $E_x(Z_{\theta}(x))$ into Eq. 7 to accurately represent the energy function formulation. Note that this factor will disappear in Eq. 8 and therefore doesn't affect the correctness of the rest of the argument.
2. We've indeed explored the generative perspective through experiments in our ablation study. Moreover, we evaluate our pretrained representation on an additional generative task. As can be seen in Fig. 2, the right-hand side of the model's output is ready for downstream generation tasks. Please consult the experiment section of the general response for details.
[1] "A flexible diffusion model" ICML 2023
[2] "Equivariant message passing for the prediction of tensorial properties and molecular spectra" ICML 2021.
---
Rebuttal 2:
Title: Gentle Reminder
Comment: Hope this message finds you well. We have carefully addressed your questions and have incorporated additional generative experiments (in the general response) based on your valuable advice. We are eager to know if our response has effectively addressed the concerns you raised in your initial review. Should you need further clarification or have any additional points you would like us to consider, please do not hesitate to share your thoughts. We are committed to ensuring that all your concerns are fully addressed.
Thanks!
---
Rebuttal Comment 2.1:
Comment: I appreciate your comprehensive response regarding my questions and concerns. Based on the rebuttal, I think my biggest concern about the paper on the experimental performance has been properly addressed. I hope the authors add the new results with missing details to the revised manuscript to make it more concrete. In this regard, I am happy to raise my original score.
---
Reply to Comment 2.1.1:
Title: Thanks
Comment: We sincerely appreciate your time and effort in re-evaluating our work. We will make the suggested revisions as promised in our updated manuscript. | Summary: The authors propose an auto-encoding method for learning molecular embeddings from both 3D and 2D information jointly. The method is loosely related to diffusion methods in that embeddings of data augmentation trajectories are learned via a score-based reconstruction loss and contrastive loss. The embeddings give SOTA performance on a number of QM and activity prediction tasks.
Strengths: ### Originality
The method is original in that it encodes both 3D and 2D molecular information via learning to embed data augmentation trajectories.
### Quality
The results indicate that the resulting embeddings are SOTA.
### Clarity
The paper could be made much clearer (see Weaknesses).
### Significance
Optimal molecular embeddings are a perennial desire for all kinds of downstream molecular tasks. The SOTA performance of these embeddings suggests that this method represents a significant contribution.
Weaknesses: - Section 2 Background is a meandering presentation of a number of related ideas, some of which are not immediately relevant for the main paper. Those should be moved to the appendix, and the remainder should be made much crisper so that it is clear to the reader how the remaining sections will proceed.
- the unnumbered equation after Eq. 4 is missing an $e_m$.
- It is not at all clear what data augmentations are used in this paper. E.g. what are $\mu_1$ and $\mu_2$ in Eq. 6?
- The presentation of the MLE in Section 3.2 is unnecessarily confusing. There is no need to introduce energy-based models, Eq. 9 can be derived without the energy functions. Note that the energy functions are never referred to again. The only thing is the surrogate gaussian KDE, but again that does not need to be motivated by energy functions.
Typos:
- The statement $E_\theta(x_0) = \log q_\theta(x_0)$ is incorrect since it is missing a $\log Z_\theta$.
- Most of the score expressions in section 3.2 are missing a $\log$.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The unnumbered equation after Eq. 5 does not make sense - how does one apply a rigid transformation to time t?
- How are the results in Tables 1 and 2 obtained? What model is trained on the learned embeddings?
- What is the intuition behind this entire paper? Why does embedding a trajectory give better performance than embedding just the stationary data? How do these results depend on the choice of trajectories?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Some limitations are addressed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our sincere gratitude for your meticulous examination and insightful feedback.
**Weaknesses: Organization of Section 2 and Section 3.2, and Equation Missing $e_m$**: To rectify this, we will enhance the clarity of Section 2 by trimming the general introduction of the diffusion mechanism. Our intention is to directly establish a linkage between our equivariant trajectory approach and the classical data augmentation technique, employing the heat and discrete (cold) diffusion equations (3) and (4). The terminology "energy" at lines 195-196 is employed solely to represent the logarithm of the probability density function, a convention consistent with the Bayesian optimization literature.
To respond to your concern about introducing general energy-based models, we will streamline the MLE exposition in Section 3.2, omitting the introduction of energy-based models from the derivation of Equation 9.
Furthermore, to optimize space usage, we will introduce a comprehensive roadmap at the conclusion of the introduction. This addition will illuminate the forthcoming sections' structure, easing the reader's navigation through our paper.
**Clarification on Data Augmentations in Eq. 6**: We deeply appreciate your scrutiny. We commit to providing unequivocal clarity by explicitly stating that $\mu(x,t) = -x$ for both $x=E(t)$ and $H(t)$ in Eq. 6. We also use a discrete equation of $E(t)$ and $H(t)$ (the formula is given in the appendix) for an additional generation task suggested by other reviewers. Please consult the general response.
**Addressing Typos on Missing $\log$**: We will delete the statement on $E_{\theta}(x_0)$. We want to emphasize that the rest of the argument remains valid due to the cancellation of $Z_{\theta}$ by the definition of conditional probability. The assertion regarding missing "score" expressions in Section 3.2 appears to be a misunderstanding, since no "score" appears in Section 3.2.
**Questions**:
Q1: The unnumbered equation following Eq. 5 will be rectified: $t$ will be replaced with $x_{\text{3D}}(t)$.
Q2: The results in Tables 1 & 2 are obtained by re-running standard pretraining-and-finetuning pipelines (the references are given in Lines 295-304) on one V100 GPU. Specifically:
- In the pretraining stage, we pretrain an encoder (e.g., the equivariant encoder in Figure 1) using various pretraining algorithms.
- In the finetuning stage, we add a linear prediction head on the pretrained encoder and then take an end-to-end finetuning.
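As a rough illustration of this two-stage protocol, here is a toy stand-in (not the authors' code; the encoder, data, and dimensions below are all hypothetical, and a linear probe is used in place of full end-to-end finetuning): a fixed "pretrained" encoder followed by a linear prediction head fitted on downstream labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 stand-in: a "pretrained" encoder, here just a fixed random
# projection with a nonlinearity, playing the role of a pretrained backbone.
W_enc = rng.standard_normal((16, 4))
encode = lambda x: np.tanh(x @ W_enc)

# Toy downstream data: inputs and a scalar property target that is
# (noisily) linear in the encoder's features.
X = rng.standard_normal((200, 16))
w_true = rng.standard_normal(4)
y = encode(X) @ w_true + 0.01 * rng.standard_normal(200)

# Stage 2: add a linear prediction head on the frozen encoder and fit it.
# (End-to-end finetuning, as in the paper, would also update the encoder.)
Z = encode(X)
w_head, *_ = np.linalg.lstsq(Z, y, rcond=None)

mse = np.mean((Z @ w_head - y) ** 2)
assert mse < 0.01  # the head recovers the target well on this toy data
```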
Q3: The paper's intuition is outlined in sections 2.1, 3.2, and 3.3. Essentially, we propose trajectory utilization (inspired by real molecule dynamics and diffusion generative models) for continuous data augmentation. Via standard unsupervised representation learning, we extract learning objectives from augmented data distributions. Our model captures joint trajectory distributions, culminating in a combination of contrastive and generative learning objectives in section 3.3, supported theoretically in section 3.2. As to the question of "How do these results depend on the choice of trajectories", we recognize the merit in investigating this avenue further. It aligns intriguingly with the exploration of whether a **composition of diverse data augmentations (trajectory-based or not)** can enhance representation learning. Consequently, we intend to address this pertinent question in our upcoming "Future Outlook" section.
---
Rebuttal 2:
Title: A Kind Reminder
Comment: I hope this message finds you in good health. I wanted to take a moment to express our sincere gratitude for your positive evaluation of the novelty and methodology of our paper, as well as your insightful suggestions for improving our presentation. Your feedback has been invaluable, not only in addressing critical typos that play a crucial role in ensuring readers' understanding of our method, but also in shaping the overall organization of our revisions.
We have carefully considered your feedback and have made efforts to address the concerns you raised. As we approach the conclusion of the discussion period, we are eager to ensure that our response effectively addresses your concerns and aligns with your expectations. If there are any lingering questions or if you require further clarification on any aspect of our work, please don't hesitate to reach out to us. We are committed to ensuring that your concerns are fully addressed before the deadline. | Rebuttal 1:
Rebuttal: ## General Response
We thank all the reviewers for their time, and valuable feedback for improvements. All relevant works and typos mentioned by reviewers will be discussed in the revised version. Now, we clarify and address some common issues that have been raised by the reviewers.
**Motivation and intuition** (Reviewer LUAH, s6nt, oitq): We appreciate the opportunity to clarify the motivation and intuition behind our approach. In **Section 2.1**, we introduced the concept of utilizing trajectories as continuous data augmentation, drawing inspiration from real molecule dynamics and diffusion generative models. Following the standard procedure of unsupervised representation learning, we derived a learning objective from the augmented data distribution. Our methodology involves modeling the joint distribution of trajectories (as elaborated in **Section 3.2**), which culminates in a novel combination of contrastive and generative learning objectives, outlined in **Section 3.3**. In conclusion, **we learn a powerful representation by fitting the joint distribution of augmented equivariant trajectories which contain both 2D and 3D molecular information**. We hope this clarification better emphasizes the foundation and significance of our proposed approach.
**Paper organization** (LUAH, fgxq): To enhance clarity, we will relocate certain content from the **background section** to the appendix, ensuring a smoother flow for readers. Additionally, we acknowledge the importance of providing a clear **roadmap** for our work. In response, we will incorporate a comprehensive roadmap in the final paragraph of our introduction. Furthermore, more detailed illustrations of the **figures** will be given within the main text.
**Experiments** (fgxq, s6Nt, oitq): A fundamental misunderstanding concerns our primary aim, which is to present a comprehensive unsupervised molecule representation learning framework rooted in equivariant symmetric trajectories. Therefore, the most straightforward way to demonstrate our method's effectiveness is to **fix an equivariant backbone neural network** (we choose SchNet) and test whether the representation trained by our method can beat other representation learning methods. In contrast, some **experiments suggested by reviewers (e.g., comparing our QM9 experimental results to PaiNN [1], GNS-TAT [2]) involve different research settings**, and we believe our core experiments have effectively showcased the efficacy of our approach. To further address your suggestions, we have conducted three additional experiments (**only partial results have been shown due to a severe GPU shortage during the rebuttal period**).
1. We performed experiments **replacing the Schnet backbone with PaiNN** on certain QM9 tasks, and our MoleculeJAE achieved state-of-the-art performance. **The table is given in the attached PDF**.
2. We want to highlight that this work [2] comprises a backbone model (GNS-TAT) and a pretraining algorithm (GeoSSL-DDM-1L). (1) The default GNS-TAT in [2] is not an equivariant neural network. (2) **We have already experimented with GeoSSL-DDM-1L**. As shown in Tables 1 & 2 on page 8, MoleculeJAE outperforms GeoSSL-DDM-1L.
3. Reviewer oitq suggested testing whether our **pretrained representation performs well on generation tasks**. We apply our generalized (discrete) framework from Appendix A.3 to joint 2D and 3D structure generation (the detailed setting is given in [3]), revealing significant enhancements over non-pretrained models:
Table 1: Results on GEOM-Drugs with explicit hydrogens. We pretrained our model MoleculeJAE on PCQM4Mv2 for 24 epochs and then finetuned the generative head (see Figure 2) of MoleculeJAE on GEOM-Drugs for 211 epochs. As the table shows, MoleculeJAE achieves SOTA results on 9 out of 10 metrics.
|Model|Mol stable↑|Atom stable↑|Validity↑|Unique↑|AtomTV↓|BondTV↓|ValW1↓|Bond Lengths W1↓|Bond Angles W1↓|
| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |
|EDM (3000 epoch)|5.5|92.9|34.8|100.0|0.212|0.049|0.112|0.002|6.23|
|MiDi (2D+3D, 3000 epoch)|69.2|99.0|67.4|100.0|0.059|0.024|0.036|0.012|5.47|
|MoleculeJAE (Pretrained on PCQM4Mv2)|84.5|99.6|79.7|100.0|0.059|0.021|0.008|0.003|2.16|
We believe these supplementary experiments underscore the robustness and versatility of our approach.
[1] Equivariant message passing for the prediction of tensorial properties and molecular spectra
[2] Pre-training via Denoising for Molecular Property Prediction
[3] Midi: Mixed graph and 3d denoising diffusion for molecule generation
Pdf: /pdf/00baf1bd85aac66e1b1daf11bf47f9d687ca6a1c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Color Equivariant Convolutional Networks | Accept (poster) | Summary: This paper questions the importance of color variations for neural classifiers and proposes to use color-equivariant architectures in the case of unbalanced datasets.
To demonstrate the validity of the presented approach, the authors conduct experiments both on synthetic controlled datasets and on common object recognition benchmarks.
As the experiments show, the injection of color-equivariant layers leads to a slight improvement on almost all common benchmarks when the performance is measured on the original test set but the advantage of the presented method becomes more evident when the test data is corrupted with hue shifts.
Strengths: This paper studies an interesting and underinvestigated question of the importance of color representation for neural networks.
The submission is easy to read, and the motivation is well explained in the example of the Flowers dataset.
The authors have conducted a significant number of experiments to support their claims.
Additional strength is that the demonstrated performance improvement is achieved without increasing the number of trainable parameters (line 238).
Weaknesses: 1. While the idea of extending equivariance from geometric to photometric transformations is definitely interesting, the submitted manuscript, unfortunately, focuses on only one type of such transformations, i.e. hue shifts. Although the Flowers dataset is a perfect fit for this transformation, the authors do not discuss other use cases where this type of equivariance may be interesting in practice and merely mention "accidental recording conditions" (line 3). For other datasets, hue shifts seem less meaningful, and the better robustness of the proposed CE-ResNets to such shifts at test time is explained by the fact that the architecture was intentionally designed for this scenario. Taking this into account, I find the scope of the paper a bit limited.
1. In addition to being limited in the number of considered photometric transformations, the paper also considers a single task of object recognition. I would encourage the authors to consider other tasks as well, e.g. unsupervised domain adaptation.
1. While the authors claim their approach makes networks more robust to test time corruptions (Tab. 1), they do not demonstrate other baselines aiming to provide robust outputs, e.g. adversarially robust models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I ask the authors to discuss the weaknesses stated above. The main thing I am interested in is the usage of robust baseline models.
1. While preserving the number of trainable parameters is valuable, I wonder if the network throughput remains the same for color-equivariant architectures in comparison with the baselines. If this is not the case, how significant is the decrease?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer w9HG for the helpful remarks and interest in our work. Please find a detailed response to your questions and suggestions below:
---
**Weaknesses:**
> 1. While the idea of extending equivariance from geometric to photometric transformations is definitely interesting, the submitted manuscript, unfortunately, focuses on the only type of such transformations, i.e. hue shifts. [...] Taking this into account, I find the scope of the paper a bit limited.
We regret that we did not clarify this better. A hue shift is truly fundamental: simply changing the light bulb or the time of day yields a different illuminant and thus a hue shift. Robustness to hue shifts is a still-unsolved, important, and active research direction (color constancy). See, e.g., the recent (top-venue) publications below.
We will add these citations and motivation to related work.
- Mahmoud and Brown. "What else can fool deep learning? Addressing color constancy errors on deep neural network performance." ICCV. 2019.
- Li, Bing, et al. "Ranking-based color constancy with limited training samples." IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).
- Ono, Taishi, et al. "Degree-of-linear-polarization-based color constancy." CVPR. 2022.
> 2. In addition to being limited in the number of considered photometric transformations, the paper also considers a single task of object recognition. I would encourage the authors to consider other tasks as well, e.g. unsupervised domain adaptation.
This is true. Unfortunately, given the limited time of the rebuttal, we were not able to run additional tasks.
Please also see our answer to Reviewer gSRU: We chose image classification because classification models, including the tested ResNets, are also commonly used as backbones for other applications. While we acknowledge that additional experiments on extra applications would strengthen the argument for generalization of our method, we instead opted, given limited compute resources, for a more thorough evaluation on multiple classification datasets.
> 3. While the authors claim their approach makes networks more robust to test time corruptions (Tab. 1), they do not demonstrate other baselines aiming to provide robust outputs, e.g. adversarially robust models.
Agreed. Additional comparisons with two baselines, CIConv and AugMix, can be found in the general author rebuttal, and will also be included in the paper.
---
**Questions:**
> 1. I ask the authors to discuss the weaknesses stated above. The main thing I am interested in is the usage of robust baseline models.
See above.
> 2. While preserving the number of trainable parameters is valuable, I wonder if the network throughput remains the same for color-equivariant architectures in comparison with the baselines. If this is not the case, how significant is the decrease?
Definitely. We already discuss some compute efficiency in the Implementation section of the paper: with the same number of channels, CEConv performs a factor of $\frac{|H_n|^2}{k^2} + |H_n|$ more MACs. However, the true MAC increase is lower since the number of channels in a CEConv network is reduced to maintain the same parameter count as the baseline.
Unfortunately, given the limited time for the rebuttal, we could not yet provide these numbers in the rebuttal, but we will include these measurements in the final paper.
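As a quick numeric sanity check of that factor (with hypothetical values for $|H_n|$ and $k$, not measurements from the paper):

```python
# Hypothetical setting: 3 discrete hue rotations and 3x3 spatial kernels.
Hn, k = 3, 3
factor = Hn**2 / k**2 + Hn  # |H_n|^2 / k^2 + |H_n|
assert factor == 4.0  # i.e. 4x more MACs at equal channel width
```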
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed feedback and provided additional results. After reading other reviews as well as authors' comments, I tend to slightly raise the score. I strongly encourage the authors to add all the promised experiments to the updated manuscript. | Summary: The authors introduce a color equivariant convolutional neural network. To achieve this the authors represent the image in HSV format, and achieve hue equivariance using methods for rotational equivariance. This is possible since hue can be represented by an angle. The authors show that the proposed approach out performs standard CNNs and color invariant CNNs when there is a hue shift between the train and test set.
Strengths: * Originality: The presented method of building a color equivariant CNN appears to be original.
* Quality: The work appears to be of fairly good quality.
* Clarity: The paper is well written.
* Significance: The observation that color equivariance can be achieved by identifying hue with the rotation group is interesting. The results show the proposed approach leads to improved performance when there is a color based domain shift.
Weaknesses: * Quality: I have some questions about the mathematical presentation, and experiment design (see questions).
* Clarity: Some aspects were unclear to me, due to presentation or motivation (see questions)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * regarding equation 3. I believe the correlation should be between the feature maps and $C^{l+1}$ filters [7]. I think it could be clarifying to write that $c$ is the input channel and $i$ is output channel.
* regarding equation 6: I'm not sure I understand how does H_n(k) acts on \psi_c? In the case of group 2D rotation equivariance, the filters are transformed by in plane rotations. It is not clear to me how one can perform a 3D rotation of a filter with an arbitrary number of channels.
* It seems to me that the network is designed for local hue equivariance, can the authors clarify the benefit of this over global hue equivariance (i.e, performing hue shift on the input image then processing all inputs with the same cnn, and combining representations at the final layer to get a hue-equivariant representation)?
* Does computational expense improve if input images are converted to HSV before being passed into the network? In this case, I expect hue equivariance could be achieved by discretization of the 2D rotation group rather than a 3D transformation.
* Have the authors experimented with finer/coarser discretizations of the hue/rotation group
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * Limitations -- in particular the issue of computational cost -- are communicated
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Great thanks for your helpful comments and appreciation for our work. Please find detailed answers to your questions and remarks below:
---
**Questions:**
> 1. regarding equation 3. I believe the correlation should be between the feature maps and $C^{l+1}$ filters [7]. I think it could be clarifying to write that $c$ is the input channel and $i$ is the output channel.
Thank you, this is indeed a mistake from our side. We will correct / clarify in the paper.
> 2. regarding equation 6: I'm not sure I understand how does H_n(k) acts on \psi_c? In the case of group 2D rotation equivariance, the filters are transformed by in plane rotations. It is not clear to me how one can perform a 3D rotation of a filter with an arbitrary number of channels.
Excellent question - as each "pixel" of a filter in a hidden layer lives in a high-dimensional space, applying 3D rotations is indeed not possible. 3D rotations are only applied at the input layer, whereas in hidden layers cyclic permutations are performed to retain equivariance throughout the network. We will better clarify this in the paper.
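To make the permutation intuition concrete, here is a small NumPy sketch (our illustration with toy shapes, not the paper's implementation) verifying that a cyclic shift along the group axis commutes with a layer that correlates over the group dimension and applies a shared pointwise nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                 # |H_n|: number of discrete hue rotations
feat = rng.standard_normal((n, 8))    # toy feature map: (group, spatial)
filt = rng.standard_normal(n)         # one filter weight per group index

def group_layer(x):
    # Correlate over the group axis with cyclically shifted filter copies,
    # then apply a pointwise ReLU shared across all group indices.
    out = np.stack([np.sum(np.roll(filt, k)[:, None] * x, axis=0)
                    for k in range(n)])
    return np.maximum(out, 0.0)

# Cyclic permutation along the group axis (the hidden-layer analogue of a
# hue shift of the input).
shift = lambda x, k: np.roll(x, k, axis=0)

# Equivariance: shifting the input then applying the layer equals applying
# the layer then shifting the output.
lhs = group_layer(shift(feat, 1))
rhs = shift(group_layer(feat), 1)
assert np.allclose(lhs, rhs)
```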
> 3. It seems to me that the network is designed for local hue equivariance, can the authors clarify the benefit of this over global hue equivariance (i.e, performing hue shift on the input image then processing all inputs with the same cnn, and combining representations at the final layer to get a hue-equivariant representation)?
Interesting alternative - we have briefly considered this, but initial experiments did not yield promising results and therefore did not further pursue this approach. The theoretical benefit of local over global hue invariance is that multiple objects in one image can be recognized invariantly in any combination of hues - this indeed appears to be a useful property. We will include a note on local vs. global equivariance in the Discussion.
> 4. Does computational expense improve if input images are converted to HSV before being passed into the network? In this case, I expect hue equivariance could be achieved by discretization of the 2D rotation group rather than a 3D transformation.
Thanks, we have indeed considered this option. Converting an input to the HSV space introduces a discontinuity between 359-0 degrees, which can lead to suboptimal results when used as an input to a neural network. Alternatively, color spaces such as LAB could be used, where a hue shift can indeed be modeled as a 2D rotation. Eventually we did not see any benefit in 2D rotations, as the additional compute of 3D rotations is negligible on the scale of deep NNs, and this way the network could work directly on RGB images.
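For completeness, a small sketch (our illustration; per the setting described here, a hue shift on RGB inputs is modeled as a 3D rotation about the achromatic gray axis) showing that such rotations form a discrete cyclic subgroup of SO(3):

```python
import numpy as np

def hue_rotation(theta):
    # Rotation about the gray axis (1,1,1)/sqrt(3) via Rodrigues' formula.
    axis = np.ones(3) / np.sqrt(3.0)
    K = np.cross(np.eye(3), axis)  # skew (cross-product) matrix of the axis
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

n = 3
R = hue_rotation(2.0 * np.pi / n)

# R is a proper rotation, fixes the achromatic axis (gray pixels keep their
# color), and n applications compose to the identity, generating H_n.
assert np.allclose(R.T @ R, np.eye(3))
assert np.allclose(R @ (np.ones(3) / np.sqrt(3.0)), np.ones(3) / np.sqrt(3.0))
assert np.allclose(np.linalg.matrix_power(R, n), np.eye(3))
```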
> 5. Have the authors experimented with finer/coarser discretizations of the hue/rotation group?
This is indeed something we have looked at - in appendix D we provide an ablation study on the number of hue rotations, i.e. the discretization of the hue group. Figure 7 shows that increasing the number of rotations improves test-time hue shift performance. However, there is also a trade-off in model capacity as increasing the number of rotations increases the number of parameters in the model. This should be counteracted by reducing the width (number of channels) in the architecture to keep the number of parameters equal, thereby somewhat reducing model capacity.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. It seems reasonable to me that hue transformations for higher dimensional filters would need to be defined differently, but the solution the authors propose i.e., using a cyclic permutation, is not intuitive to me. Is the motivation for choosing a cyclic permutation that it is also orthogonal? How does it relate back to considerations of hue?
---
Reply to Comment 1.1.1:
Title: Response to Official Comment by Reviewer 9e3n
Comment: Dear reviewer 9e3n,
Thank you for your question. We will try to clarify the intuition behind our method. Group equivariant convolutions encode additional transformations (i.e. other than translations) in an extra feature map dimension $G$. In the case of Color Equivariant Convolutions, each index $g \in G$ represents one hue transformation, where a hue shift in the input results in a permutation in $G$. The cyclic permutation in the hidden layers therefore preserves the hue equivariance property in the network. This is consistent with the original GConvs [1], where a rotated input results in a rotation and cyclic permutation in the network feature maps. We will further clarify this in the text.
*References*
[1] Taco S. Cohen and Max Welling. Group equivariant convolutional networks. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, ICML'16, pages 2990–2999. JMLR.org, 2016. | Summary: Paper proposes color-equivariant CNN layers by imposing equivariance to H_n (a discrete subgroup of SO(3)) in the RGB space, which imposes hue equivariance. Implementation follows the framework of Group-equivariant CNNs. Experiments show marginal improvements over standard CNNs for in-distribution test data but significant improvements when test data is hue-shifted.
Strengths: 1. Color equivariance in CNNs is a relatively less-studied but an important topic for robustness. The proposed idea of incorporating equivariance to hue transformations via rotations in the RGB space is novel.
2. Experiments are set up well, clearly showing when color equivariance is helpful vs. color invariance vs. no symmetry. The proposed approach shows improvements over CNN even on in-distribution test data, but major improvements come when test data is hue-shifted.
Weaknesses: 1. Definition of color equivariance considered in the paper seems to be restricted as it only considers the hue dimension. One of the motivations for incorporating color equivariance is for robustness to illumination changes which I do not think is guaranteed here. A general definition of color-equivariance should consider other dimensions. Maybe the claims are better justified if Hue-equivariance is emphasized in the title/introduction/method name, etc.
2. The definition of hue-equivariance is not precise in the paper. Ideally, it should include all rotations in the RGB space (i.e., SO(3)), but also consider the fact that many of these rotations take the color values out of the RGB space (unit cube). In general, this issue occurs for the discrete subgroup $H_n$ as well. Simply projecting the color values back into the RGB space does not work as it breaks the invertibility property of these transformations.
3. Experiments compare with a standard CNN (+grayscale) as baseline. Other baselines can be included, for example [1], that considers invariance to illumination color/intensity.
4. Experiments in the main paper only consider the group $H_3$ (i.e., 3 rotations in the RGB space), which seems limited in robustness, as shown in Figure 1 without jitter augmentations.
References:
[1] Lengyel, Attila, et al. "Zero-shot day-night domain adaptation with a physics prior." _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 2021.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. How is the hyperparameter $s$ chosen for different datasets in Table 1, especially for hue-shifted test dataset (assumed not known during training/validation)?
2. Is the invariance captured by Grayscale-Z2CNN same as that considered in the paper, i.e., invariance to hue? Would a better preprocessing step be only considering a canonical representation in the hue dimension?
3. In the color-imbalance experiments, were the CNNs trained with loss weighted according to the class imbalance?
4. Except being computationally expensive, are there other issues with using $H_n$ for higher n? Ablation in the Appendix seem to show better robustness with higher n, and also good in-distribution performance (at 0 hue shift). So I am not sure why $H_3$ alone was considered in the main paper.
5. It would be helpful to add a few lines of summary on the ablation studies in Sec 4.2.
6. Can experiments in Section 4.1 include CNN+jitter as it seems to provide competitive performance in Section 4.2?
**AFTER REBUTTAL**
Authors have addressed most of my concerns, particularly, adjusted the claims regarding color vs hue equivariance and added relevant baselines. I am increasing my score as I think this paper can be an important addition to an understudied topic.
However, I am still concerned about the mathematical definition of hue-equivariance (via H_n group) since these transformations can map a color outside the unit cube and projecting back loses invertibility requirement of a group. I accept that this may not reduce performance in practical tasks (except certain inconsistencies pointed out by the authors), but I believe it is not mathematically correct.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations should discuss lack of robustness to other color dimensions (e.g., illumination).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the helpful comments, please find our detailed response to your questions and remarks below:
---
**Weaknesses:**
> 1. Definition of color equivariance considered in the paper seems to be restricted as it only considers the hue dimension. [...]
Yes! While hue is arguably the most fundamental property of color, others such as saturation and brightness are also important. We will refocus the title, introduction, and method sections to be on "hue-equivariance". In the conclusion and future work we will clarify limitations and possible extensions.
> 2. The definition of hue-equivariance is not precise in the paper. Ideally, it should include all rotations in the RGB space (i.e., SO(3)), but also consider the fact that many of these rotations take the color values out of the RGB space (unit cube). [...]
Agreed. To clarify, for an operation to be hue-equivariant, it should be equivariant only to all rotations around the [1,1,1] diagonal (not necessarily full SO(3)). It is indeed true that pixel values near the cube borders can fall outside of the RGB cube and need to be projected back. In practice, this caused some inconsistencies over varying hue (this explains inconsistent peaks in Fig 7 of supplemental material). We will better clarify this in the method and limitations sections. See also our answer to Reviewer gSRU.
> 3. Experiments compare with a standard CNN (+grayscale) as baseline. Other baselines can be included, for example [1], that considers invariance to illumination color/intensity.
Agreed. We added [1], as well as AugMix - please find the results above.
> 4. Experiments in the main paper only consider the group (i.e., 3 rotations in the RGB space), which seems limited in robustness [...].
Due to space limitations we had to restrict our analyses in the main paper to 3 rotations - in appendix D we provide an ablation study on the number of hue rotations. Figure 7 shows that increasing the number of rotations improves test-time hue shift performance. However, there is also a trade-off in model capacity as increasing the number of rotations increases the number of parameters in the model. We will add this to the paper.
---
**Questions:**
> 1. How is the hyperparameter chosen for different datasets in Table 1, especially for hue-shifted test dataset?
Due to space limitations, Table 1 in the main text shows only the best networks. The same value for $s$ is used for both the original and hue-shifted test set and we provide the full results in Table 1 of supplementary material - we will better clarify this in the main paper text and caption. Note that for any fixed $s$ the CE networks outperform the baseline model on the hue-shifted test set of almost all datasets.
> 2. Is the invariance captured by Grayscale-Z2CNN same as that considered in the paper, i.e., invariance to hue? Would a better preprocessing step be only considering a canonical representation in the hue dimension?
Interestingly, the grayscale invariance makes objects less distinct from their background (see fig 1 of the original paper). A canonical representation in the hue dimension would indeed retain background distinction and be hue-shift invariant. Yet, such a mapping is non-trivial as it depends on the semantic object/background colors. Such a canonical representation approach is similar to white balancing for achieving color constancy (see also our answer to Reviewer w9HG). As such, finding a "canonical hue representation" is orthogonal to our equivariance approach and reminds us of a spectral variant of "spatial transformer networks" (ref below).
We will better explain the relationship with color constancy and white-balancing in related work; and add this alternative approach to the conclusion and future work.
- Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. "Spatial transformer networks." Advances in neural information processing systems 28 (2015).
> 3. In the color-imbalance experiments, were the CNNs trained with loss weighted according to the class imbalance?
This was not the case in the original experiments. We have re-run the experiments with a weighted loss and observed no significant differences to an unweighted loss. We will include both results in the paper / supp. mat.
> 4. Except being computationally expensive, are there other issues with using $H_n$ for higher $n$?
Increasing $n$ increases both computation cost as well as the model’s parameter count. As such, in fair experiments where the total number of parameters is to be equalized between tested models, there is a trade-off between increasing $n$ and increasing the number of channels. Increasing $n$ by too much therefore hurts the model capacity, as can be seen by the slightly decreasing performance at 0-deg. in Figure 7 of the supp. mat. We will further clarify this in the paper.
> 5. It would be helpful to add a few lines of summary on the ablation studies in Sec 4.2.
Definitely! We will add the following to our camera-ready version of the paper: “In short, we find a) that hue equivariant networks require less intense color jitter augmentation to achieve the same test-time hue shift robustness and accuracy, b) that removing group coset pooling breaks hue invariance, and c) that increasing the number of hue rotations increases robustness to test-time hue shifts.”
> 6. Can experiments in Section 4.1 include CNN+jitter as it seems to provide competitive performance in Section 4.2?
We included CNN+jitter - please find the results above. For long tailed MNIST, adding jitter makes solving the classification problem prohibitive, as color is required. For biased MNIST, performance decreases for small and improves for large $\sigma$, with CEConv still performing best. | Summary: This paper proposes color equivariant convolutional networks (CE-CNNs), a novel convolutional neural network architecture that achieves equivariance to hue changes. 
They introduce color equivariant convolutions that apply a discrete set of hue rotations to the convolution filters during the forward pass. This makes the network output invariant to corresponding hue changes in the input image.
They propose a group coset pooling layer that pools feature maps over the group of hue transformations to achieve invariance.
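A minimal sketch of this invariance property (our own toy code; the tensor shapes and the choice of max-pooling are assumptions based on the description above): pooling over the group axis removes the dependence on which discrete hue the input had:

```python
import numpy as np

def coset_pool(feat):
    """Pool over the hue/group axis; feat has shape (n_rot, C, H, W)."""
    return feat.max(axis=0)

rng = np.random.default_rng(0)
feat = rng.random((3, 8, 4, 4))  # responses for 3 discrete hue rotations

# A discrete hue shift of the input appears as a roll along the group axis;
# pooling over that axis yields the same output, i.e. hue invariance.
pooled = coset_pool(feat)
pooled_after_shift = coset_pool(np.roll(feat, 1, axis=0))
```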
They evaluate CE-CNNs on several image classification datasets, showing improved robustness to hue changes at test time compared to regular CNNs. The method also improves absolute classification performance in some cases.
Overall, the paper presents a novel and intuitive technique to build invariance to hue changes into CNNs. The evaluations demonstrate its advantages over standard networks, especially under shifted hue distributions between training and test.
Strengths: This paper introduces a clever yet intuitive technique to make convolutional neural networks invariant to hue changes in the input image. The core idea is to apply discrete hue rotations to the convolution filters during the forward pass, essentially "baking in" robustness to color changes.
The paper is clearly written and easy to follow. The authors motivate the problem well, explain their proposed method succinctly, and provide thorough experimentation across image datasets. The visualizations offer useful insights, confirming that the networks learn consistent features across hues.
Overall, I found this to be an original and significant contribution. Invariance to hue shifts is a practical problem, and this paper tackles it through an elegant approach that outperforms regular CNNs. The concept of encoding transformations into convolutions seems powerful. While not the flashiest technique, the method is thoughtful, principled, and achieves strong results. The paper is presented clearly and comprehensively, making the ideas accessible. In summary, this is a high quality paper with both theoretical and practical value.
Weaknesses: - The method is demonstrated on image classification, but it's unclear how well it would generalize to other tasks like detection or segmentation. Additional experiments on other applications could strengthen the claims.
- The ablation study on number of hue rotations suggests performance varies across different shifts. It would be useful to dig deeper into why - is it an artifact of how shifts are applied? Better understanding this could improve results further.
- The approach encodes discrete hue rotations. An interesting extension could be supporting continuous rotations for finer-grained equivariance.
- The comparisons to "grayscale" networks should be interpreted carefully, as removing color information entirely handicaps models. Comparisons to networks pre-trained on ImageNet may be more meaningful.
- The Flowers-102 experiments indicate the method doesn't help much on datasets without color bias. Analyzing when color equivariance helps or hurts could guide adoption.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The amount of color jitter augmentation during training seems to significantly impact performance. Could you do an ablation or analysis to tease apart the direct benefits of the color equivariant convolutions versus the data augmentation?
2. Have you evaluated the method on larger-scale datasets like ImageNet? Results on more complex data could better demonstrate the scalability.
3. The choice of 3 discrete rotations seems arbitrary. Can you analyze the impact of the group size to help guide selection? Are there benefits beyond 3 rotations?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. The comparison to only a ResNet baseline limits the conclusions on the benefits of the proposed method. Comparisons to other approaches could provide useful context.
2. The long-term impacts of building color equivariance into models is unclear. Discussion of downstream effects on fairness and interpretability could be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We highly appreciate your helpful comments and the recognition of our efforts. Please find a point-by-point response to your remarks below.
---
**Weaknesses:**
> 1. The method is demonstrated on image classification, but it's unclear how well it would generalize to other tasks like detection or segmentation. Additional experiments on other applications could strengthen the claims.
This is true; yet, given the limited space, we could only focus on image classification. Image classification models, including the tested ResNets, are commonly used as backbones for other applications. We will publish our code, where CEConv and CE-ResNets are implemented as easy-to-use plug-and-play modules, to accommodate exploration of other relevant tasks.
> 2. The ablation study on number of hue rotations suggests performance varies across different shifts. It would be useful to dig deeper into why - is it an artifact of how shifts are applied? Better understanding this could improve results further.
Great question! We analyzed this inconsistency and found that it results from the clipping caused by occasionally rotating pixel values outside of the RGB cube and projecting them back. While the resulting values are no longer exact, the general trend of improvement still holds approximately. See also our answer to Reviewer 81VY.
We will add this analysis and conclusions to the paper; and add a warning about "color clipping" in the source code.
> 3. The approach encodes discrete hue rotations. An interesting extension could be supporting continuous rotations for finer-grained equivariance.
Definitely. Such “steerable” hue equivariance method would however require a quite different method from the proposed G-CNN-based approach. As this is an interesting direction to explore, we will add this idea to our discussion of future work.
> 4. The comparisons to "grayscale" networks should be interpreted carefully, as removing color information entirely handicaps models. Comparisons to networks pre-trained on Imagenet may be more meaningful.
Exactly - this is what hue-equivariance aims to remedy. See "Grayscale" in Fig 1 of the paper. We will better emphasize this.
> 5. The Flowers-102 experiments indicate the method doesn't help much on datasets without color bias. Analyzing when color equivariance helps or hurts could guide adoption.
This is indeed an important analysis, which we have looked into by investigating the color selectivity of CNN neurons when trained on different datasets (see Fig. 3 in the main paper). Color equivariance appears to be most beneficial whenever color selective neurons are learned - this is mostly the case for colorful datasets, such as Flowers-102. We will further clarify this in the text.
---
**Questions**
> 1. The amount of color jitter augmentation during training seems to significantly impact performance. Could you do an ablation or analysis to tease apart the direct benefits of the color equivariant convolutions versus the data augmentation?
Certainly - we provide an ablation study on the strength of color jitter augmentation in appendix D. Figure 5 shows that training with color jitter improves both the baseline and the hue equivariant models, however the equivariant model needs less jitter to achieve good generalization and achieves a higher accuracy overall. The reason for this is that the equivariant architecture only requires hue augmentation "between" the discrete rotations that it is already robust to, as opposed to the full scale of hue shifts for the baseline architecture. We will elaborate on this in the paper.
> 2. Have you evaluated the method on larger-scale datasets like ImageNet? Results on more complex data could better demonstrate the scalability.
We have performed experiments on Imagenet; the results are reported in Table 1 in the main text, as well as in Table 1 in the supplementary material. We find that also on more complex datasets color equivariance is beneficial, though model capacity is of bigger importance. The hybrid color equivariant architectures where only the first $s$ stages are equivariant demonstrate the best of both worlds, i.e. equivariance to hue shifts, while also allowing a bigger network width for the same number of parameters compared to a fully equivariant architecture.
> 3. The choice of 3 discrete rotations seems arbitrary. Can you analyze the impact of the group size to help guide selection? Are there benefits beyond 3 rotations?
In appendix D we provide an ablation study on the number of hue rotations. Figure 7 shows that increasing the number of rotations improves test-time hue shift performance. However, there is also a trade-off in model capacity as increasing the number of rotations increases the number of parameters in the model, as discussed in the Limitations section of the paper. The trade-off therefore depends on the amount of color vs. the complexity of the data - we will clarify this in the paper.
---
**Limitations**
> 1. The comparison to only a ResNet baseline limits the conclusions on the benefits of the proposed method. Comparisons to other approaches could provide useful context.
Thank you for the suggestion. We have performed additional comparisons with two baselines, CIConv and AugMix, the results and discussion can be seen in the general author rebuttal, and will also be included in the paper.
> 2. The long-term impacts of building color equivariance into models is unclear. Discussion of downstream effects on fairness and interpretability could be beneficial.
Improving performance on tasks where color is a discriminative feature could affect humans that are the target of discrimination based on skin tone. CEConvs benefit datasets with long-tailed color distributions by increasing robustness to color changes, which could be useful in reducing a CNN's reliance on skin tone as a discriminating factor. We will add these considerations to the paper.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Authors, thank you for the detailed responses to my comments. Your rebuttal addresses most of my questions and provides valuable discussion. In particular, the additional experiments analyzing color jitter augmentation better explain the advantages of your method. I have some follow-up thoughts:
1. The ImageNet experiments show model capacity is also important. This reminds us that there is a balance between color equivariance and network width when designing models. Further analysis from the authors on how to strike this balance would be helpful.
2. Thanks for the comparisons to CIConv and AugMix. These do provide more thorough baseline comparisons. If space allows, I'd recommend including these results in the main paper rather than just the rebuttal.
3. You mention more hue rotations improves robustness but also increases parameters. I strongly suggest analyzing the impact of group size on performance with fixed parameters. This would better illustrate the trade-off for readers.
4. Very glad to see the authors discuss effects on algorithmic fairness. This is an important aspect. Expanding this discussion would provide value if space permits.
Overall, your responses have addressed my questions and I'm more satisfied with the paper now. Thank you for the diligent work, I look forward to the final manuscript.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your follow-up remarks. We will make sure to include points 1, 2 and 4 in the main paper and / or supplementary materials, depending on available space. Regarding point 3: in the ablation study on the effect of the number of rotations in supp. mat. section D we have kept the parameter count fixed by downscaling the network width as we increase the number of hue rotations. We will make this more clear in the text. | Rebuttal 1:
Rebuttal: Thank you for the highly detailed and constructive feedback! In this rebuttal we address all points raised by the review team, leading to multiple improvements, including: evaluations against relevant baseline methods, clarifications, and insights. Below, we answer individual questions per reviewer in a point by point fashion. All answers will appear in the camera-ready version of the submission.
For the ColorMNIST experiments, we have performed additional experiments with color jitter augmentation. An updated version of Figure 2 is included in the rebuttal PDF. In both the "longtailed" setting (Figure 2a) and the low $\sigma$ settings of the "biased" setting (Figure 2b), color jitter hurts performance since the tasks require color as a discriminative feature. Only for the high $\sigma$ settings of the "biased" setting, color jitter improves performance since it induces color invariance. However, CECNN captures the best of both worlds: CECNN improves performance over the baseline Z2CNN in both tasks, regardless of $\sigma$.
For the image classification experiments, as multiple reviewers have suggested additional baselines for comparison, we will present the results here. We have included experiments with CIConv-W [1], as well as AugMix [2] - please find the results below. “Baseline” denotes a vanilla ResNet-18/44; [1] and [2] use the same ResNet with color invariant (CIConv) and equivariant (CEConv) convolutions, respectively. Despite discarding color information, [1] indeed proves to be a robust baseline under hue shifts, though it performs poorly on CIFAR due to the loss of detail by the edge detector. [2] slightly improves performance on both the original and hue-shifted test set, but its robustness lags behind that of CEConv. Finally, for completeness we have trained CEConv with [2], yielding good performance on both test sets.
[1] Lengyel, Attila, et al. "Zero-shot day-night domain adaptation with a physics prior." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[2] Hendrycks, Dan et al. "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty." Proceedings of the International Conference on Learning Representations (ICLR). 2020"
| Original test set | Caltech-101 | CIFAR-10 | CIFAR-100 | Flowers-102 | Oxford-IIIT Pet | Stanford Cars | STL-10 |
|:------------------|:-----------------|:-----------------|:------------------|:-----------------|:------------------|:-----------------|:-----------------|
| Baseline | 71.61 $\pm$ 0.88 | 93.69 $\pm$ 0.16 | 71.28 $\pm$ 0.20 | 66.79 $\pm$ 0.89 | 69.87 $\pm$ 0.57 | 76.54 $\pm$ 0.10 | 83.80 $\pm$ 0.36 |
| Baseline + Jitter | 73.93 $\pm$ 0.73 | 93.03 $\pm$ 0.16 | 69.23 $\pm$ 0.44 | 68.75 $\pm$ 1.50 | 72.71 $\pm$ 0.68 | 80.59 $\pm$ 0.36 | 83.91 $\pm$ 0.38 |
| CEConv | 70.16 $\pm$ 1.05 | 93.71 $\pm$ 0.26 | 71.37 $\pm$ 0.24 | 68.18 $\pm$ 0.45 | 70.24 $\pm$ 0.79 | 76.22 $\pm$ 0.19 | 84.24 $\pm$ 0.48 |
| CEConv + Jitter | 73.58 $\pm$ 0.68 | 93.51 $\pm$ 0.10 | 71.12 $\pm$ 0.57 | 74.17 $\pm$ 0.49 | 73.29 $\pm$ 0.63 | 79.79 $\pm$ 0.37 | 84.16 $\pm$ 0.10 |
| CIConv-W [1] | 72.85 $\pm$ 1.12 | 75.26 $\pm$ 0.57 | 38.81 $\pm$ 0.66 | 68.71 $\pm$ 0.29 | 61.53 $\pm$ 0.53 | 79.52 $\pm$ 0.42 | 80.71 $\pm$ 0.27 |
| CIConv-W [1] + Jitter | **74.38 $\pm$ 0.43** | 77.49 $\pm$ 0.53 | 42.27 $\pm$ 0.56 | 75.05 $\pm$ 0.39 | 64.23 $\pm$ 0.51 | *81.56 $\pm$ 0.32* | 81.88 $\pm$ 0.24 |
| Baseline + AugMix [2] | 71.92 $\pm$ 0.95 | 94.13 $\pm$ 0.22 | **72.64 $\pm$ 0.27** | 75.49 $\pm$ 0.24 | **76.02 $\pm$ 0.51** | **82.32 $\pm$ 0.07** | 84.99 $\pm$ 0.24 |
| CEConv + AugMix [2] | 70.74 $\pm$ 1.12 | **94.22 $\pm$ 0.16** | 72.48 $\pm$ 0.18 | **78.10 $\pm$ 0.50** | 75.90 $\pm$ 0.22 | 80.81 $\pm$ 0.27 | **85.46 $\pm$ 0.30** |
| Hue-shifted test set | Caltech-101 | CIFAR-10 | CIFAR-100 | Flowers-102 | Oxford-IIIT Pet | Stanford Cars | STL-10 |
|:------------------|:-----------------|:-----------------|:-----------------|:-----------------|:------------------|:-----------------|:-----------------|
| Baseline | 51.14 $\pm$ 0.71 | 85.26 $\pm$ 0.56 | 47.01 $\pm$ 0.38 | 13.41 $\pm$ 0.35 | 37.56 $\pm$ 0.76 | 55.59 $\pm$ 0.74 | 67.60 $\pm$ 0.56 |
| Baseline + Jitter | 73.61 $\pm$ 0.60 | 92.91 $\pm$ 0.17 | 69.12 $\pm$ 0.47 | 68.44 $\pm$ 1.60 | 72.30 $\pm$ 0.49 | 80.65 $\pm$ 0.36 | 83.71 $\pm$ 0.35 |
| CEConv | 62.17 $\pm$ 1.01 | 90.90 $\pm$ 0.25 | 59.04 $\pm$ 0.45 | 33.33 $\pm$ 0.38 | 54.02 $\pm$ 1.34 | 67.16 $\pm$ 0.58 | 78.25 $\pm$ 0.51 |
| CEConv + Jitter | 73.57 $\pm$ 0.75 | **93.39 $\pm$ 0.08** | **71.06 $\pm$ 0.53** | 73.86 $\pm$ 0.39 | **72.94 $\pm$ 0.56** | 79.79 $\pm$ 0.34 | **84.02 $\pm$ 0.14** |
| CIConv-W [1] | 71.92 $\pm$ 1.11 | 74.88 $\pm$ 0.54 | 37.09 $\pm$ 0.74 | 59.03 $\pm$ 0.62 | 60.54 $\pm$ 0.46 | 78.71 $\pm$ 0.33 | 79.92 $\pm$ 0.25 |
| CIConv-W [1] + Jitter | **74.40 $\pm$ 0.55** | 77.28 $\pm$ 0.54 | 42.30 $\pm$ 0.48 | **75.66 $\pm$ 0.27** | 63.93 $\pm$ 0.42 | **81.44 $\pm$ 0.26** | 81.54 $\pm$ 0.21 |
| Baseline + AugMix [2] | 51.82 $\pm$ 0.60 | 88.03 $\pm$ 0.26 | 51.39 $\pm$ 0.19 | 15.99 $\pm$ 0.28 | 48.04 $\pm$ 0.74 | 68.69 $\pm$ 0.73 | 72.19 $\pm$ 0.45 |
| CEConv + AugMix [2] | 62.29 $\pm$ 0.97 | 91.68 $\pm$ 0.21 | 60.75 $\pm$ 0.24 | 41.43 $\pm$ 0.97 | 62.27 $\pm$ 0.81 | 73.59 $\pm$ 0.30 | 80.17 $\pm$ 0.15 |
Pdf: /pdf/7cf242c002f2ecf1be476156abb617c426e45f30.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards Hybrid-grained Feature Interaction Selection for Deep Sparse Network | Accept (poster) | Summary: The authors propose a novel feature selection algorithm, aiming to detect interactions between features at the instance level, in contrast to the usual feature selection algorithms that select the same features for every sample. Initially, the authors propose a highly memory-demanding approach, requiring an $m \times m$ matrix, where $m$ is the number of different values the features can take. Later, a less memory-demanding approach is presented, using matrix decomposition. The experimental results are slightly better than the state-of-the-art.
Strengths: - **Quality:** Several DSN methods were included in the state-of-the-art section. The proposed algorithm is able to obtain very similar results.
- **Clarity:** The paper is easy to follow and to understand. The decisions made are clearly motivated.
- **Significance:** The idea of selecting, per each sample, the most important interactions between features is very interesting and it can provide a good explanation about the decision making.
Weaknesses: - **Originality:** The algorithm is a combination of well-known techniques. The innovative part is focused on how to merge all of them.
- **Quality:** There exists another family of methods that also addresses feature interaction: the so-called *dynamic feature selection*. Techniques like L2X [1] are focused on the same goal without the need for DSNs, which greatly reduces memory consumption. Some information regarding these techniques should be included in the paper.
- **Clarity:** Fig. 3 is clearly misleading. Although it consistently suggests that the proposed method outperforms the state-of-the-art, the granularity of the y-axis is almost non-existent. There are only very small differences among the algorithms.
- **Significance:** I have concerns regarding two critical aspects of the experimental results:
1. The experimental results show very little improvement over the baseline methods. A statistical analysis is mandatory in order to establish whether the obtained results represent a real improvement over the state-of-the-art or not.
2. Although I agree with the authors that feature interaction selection can provide insightful information regarding the decisions provided by the network, the authors do not mention anything related to this in the experimental section.
[1] Chen, J., Song, L., Wainwright, M., & Jordan, M. (2018, July). Learning to explain: An information-theoretic perspective on model interpretation. In International Conference on Machine Learning (pp. 883-892). PMLR.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Not applied.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Hi Reviewer eecX:
Thanks for reviewing our paper and offering helpful comments. Below are responses to your questions.
### **W1: Originality**
Please kindly allow us to highlight our paper's originality here, as our writing may not be optimal and may have confused the reviewer. Our initial intuition and major contribution is to extend the granularity of feature interaction selection from the field level to the value level. To the best of our knowledge, we are the first to propose such an extension. To tackle the *size explosion* problem caused by this extension, we factorize the selection space. For better *selection efficiency*, we propose a hybrid-grained selection algorithm, which efficiently selects feature interactions concurrently from both the feature field and the feature value.
Hopefully, our response can address the reviewer's concern about our originality, which might have been caused by our writing.
### **W2 and Q1: comparison with techniques from other domains**
We explicitly want to thank the reviewer for mentioning L2X [3]. L2X is a classic yet inspiring work that utilizes mutual information scores as guidance to conduct *feature selection*. We were aware of this work before this project. However, we decided to exclude the discussion of such works mainly because L2X targets the *feature selection* problem (FS for short), while OptFeature targets the *feature interaction selection* problem (FIS for short).
The difference between FS and FIS generally lies in three aspects. First, the selection space is different. Given $m$ feature values, FS's selection space is smaller than $m$, while FIS's selection space is around $m^t$, with $t$ denoting the order ($t \geq 2$). Second, FIS generally requires fixed features. Third, FIS is considered an important problem in the DSN community, given that feature interaction is generally believed to be an important factor for performance boosting[1,2]. Comparably, FS is a general machine-learning problem.
Although the differences between FS and FIS do not mean we cannot borrow intuition from works like L2X, directly borrowing off-the-shelf methods might not be practical. Take L2X as an example. Even if we conduct a $t$-th order ($t \ge 2$) field-level FIS using L2X (a much smaller space than value-level FIS), the computation cost for the cross-entropy can be as large as $O(n^t \cdot S)$, with $S$ and $n$ denoting the number of data samples and the number of feature fields in the dataset. Considering that the benchmarks we adopted contain $n \ge 10$ feature fields and $S \ge 10^7$ data samples (please see Appendix B1 for details), this can be unbearable for real-world systems. This suggests that the issue needs further investigation.
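To make the scale argument concrete, here is a tiny back-of-the-envelope script using the figures quoted above ($n = 10$ fields, $m = 10^6$ values, $S = 10^7$ samples, $t = 2$). The function names and constants are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope comparison of FS vs. FIS selection-space sizes
# and the L2X-style cross-entropy cost estimate from the rebuttal.

def fs_space(m):
    # Feature selection: at most one decision per feature value
    return m

def fis_space(m, t):
    # Feature interaction selection: roughly m^t candidate t-th order interactions
    return m ** t

def l2x_field_level_cost(n, S, t):
    # O(n^t * S) cross-entropy evaluations for a field-level FIS via L2X
    return (n ** t) * S

n, m, S, t = 10, 10**6, 10**7, 2
assert fis_space(m, t) // fs_space(m) == m      # FIS space is m times larger at t = 2
assert l2x_field_level_cost(n, S, t) == 10**9   # already ~1e9 evaluations at field level
```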
### **W3: Clarity in Fig. 3**
We thank the reviewer for pointing out the granularity issue of the y-axis in Fig. 3. However, as we can observe, the y-axis in Fig. 3 represents the AUC, which reflects the performance of models. So we think this point refers to the same issue as the following point, which is the significance of our result. Kindly check CW1 in the common response for a more detailed response.
### **W4 Part 1 and Q2: significance analysis**
We deeply appreciate the reviewer's effort in upholding this rigorous standard of significance analysis, as it is extremely important for the community. Kindly check CW1 in the common response for details. In short, we added a two-tailed t-test to statistically analyze the significance of our results.
### **W4 Part 2 and Q3: insightful information**
We provide some insightful information in Appendix C4 and Figure 7 due to the page limit. The message we can easily observe from Figure 7 is that even though the field-level FIS methods retain certain feature interactions, the value-level FIS method still drops a significant proportion of them at the value level, resulting in better performance. This observation is consistent across all three benchmarks and serves as a justification of our initial intuition: field-level FIS is not good enough.
### **Q4: benefits with other algorithms**
We think the major benefit lies in both effectiveness and efficiency. As mentioned, we are the first to bring feature interaction selection to a finer granularity and to propose a hybrid-grained selection approach. This design helps us select informative interactions, which leads to better performance. As a side effect, since uninformative feature interactions are dropped by our approach, the inference time and model size are slightly improved compared with other SOTA baselines, like PROFIT.
### Reference:
1. Factorization machines (ICDM 2010)
2. DeepFM: a factorization-machine based neural network for CTR prediction (IJCAI 2017)
3. Learning to explain: An information-theoretic perspective on model interpretation (ICML 2018)
*Please notice that we use the format "title (venue)" here due to the page limit. We fully appreciate all authors' contributions to the community.*
---
Rebuttal Comment 1.1:
Title: Answer to authors
Comment: I would like to thank the authors for their responses. Regarding the clarity issue in Fig. 3, the problem is that the differences between the models seem larger than they really are. For instance, in the first figure, the y-axis spans 0.8090 to 0.8115. Visually, a value at the lower bound of the axis suggests bad performance, not just a minor decrease in performance. The margin between the top and the bottom of the axis should be larger than the one used in the figures.
---
Reply to Comment 1.1.1:
Comment: Hi Reviewer eecX,
Sorry for the misunderstanding about your concern regarding Fig. 3. Your argument is valid, as the original Fig. 3 can be confusing without careful examination. We will change the figure accordingly. Kindly let us know if there are any further concerns.
---
Rebuttal 2:
Comment: Hi Reviewer eecX,
We would again appreciate the valuable and thoughtful review. Since the deadline for the discussion period is approaching, it would be great to have feedback on if our response addresses the concerns raised in your initial review. | Summary: This work proposes a hybrid-grained feature interaction selection approach for deep sparse networks, which targets both feature field and feature value. The proposed approach uses a decomposed space that is calculated on the fly to explore the expansive space of feature interactions. The work also introduces a selection algorithm called OptFeature, which efficiently selects the feature interaction from both the feature field and the feature value simultaneously. The proposed approach is evaluated on three large real-world benchmark datasets, and the results demonstrate that the proposed approach performs well in terms of accuracy and efficiency. The work concludes that the proposed approach can effectively select feature interactions in deep sparse networks, and it has the potential to improve the performance of prediction tasks with high-dimensional sparse features.
Strengths: 1. The hybrid-grained feature interaction selection approach goes beyond traditional field-level selection, and the decomposed space and sparsification-based selection algorithm make the work appear to be a cutting-edge method to some extent.
2. This work ran the repetitive experiments with different random seeds five times and reported the average value for each result, and provides information about the parameter setup, metrics, datasets, baseline and parameter setup, so the experimental results appear to be reliable.
Weaknesses: 1. Novelty: The proposed approach targets both the feature field and feature value levels and introduces a decomposed space and a sparsification-based selection algorithm to explore the selection space, which appears to be a novel contribution to the field. However, Section 2 does not provide a comprehensive review of related feature interaction selection work, so the novelty is not obvious.
2. Experiments: What GPU was used in this work? How many were used? Were all the experiments conducted on the same GPU? Why formulate the hybrid-grained feature interaction selection as a binary selection according to Equation 6? Taking either 0 or 1 doesn’t seem to reflect ‘hybrid’.
3. Writing: the introduction does not summarize the main contributions of this work, so readers cannot intuitively grasp its advantages. In addition, the content of Section 2.1 introducing neural architecture search seems not very relevant to this paper. Furthermore, Section 3.3.2 does not explain how to determine the parameter α in Equation 6, which makes one wonder how to choose between value-grained and field-grained selection.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Experiments: What GPU was used in this work? How many were used? Were all the experiments conducted on the same GPU? Why formulate the hybrid-grained feature interaction selection as a binary selection according to Equation 6? Taking either 0 or 1 doesn’t seem to reflect ‘hybrid’.
2. Writing: the introduction does not summarize the main contributions of this work, so readers cannot intuitively grasp its advantages. In addition, the content of Section 2.1 introducing neural architecture search seems not very relevant to this paper. Furthermore, Section 3.3.2 does not explain how to determine the parameter α in Equation 6, which makes one wonder how to choose between value-grained and field-grained selection.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Hi Reviewer Bzgv:
Thanks for reviewing our paper and offering detailed comments. Below are responses to your questions.
### **W1: Unclear expression of our novelty**
Thanks for recognizing the novelty of our method. This comment reminds us of the importance of consistently highlighting our contribution and clearly differentiating our method from related works. Please check the common response regarding our revision.
### **W2 Part 1: Hardware platform**
We thank the reviewer for highlighting these details, as they are important for reproducibility and can benefit the whole community. We list them below (they are also described in detail in Appendix B3). Kindly check whether they address your question. The reviewer's comment also reminds us to be self-contained in the main paper; we will add corresponding references in the main paper later.
| Hardware | Config |
| --- | --- |
| CPU | 8-core Intel(R) Xeon(R) Gold 6140 CPU |
| GPU | Nvidia-Tesla V100-PCIe-32GB |
| Memory | 128GB |
| System | Ubuntu 18.04 LTS |
### **W2 Part 2: Design of hybrid-grained feature interaction selection**
In our humble opinion, the reviewer's confusion is about which part of the model is "hybrid". In our work, "hybrid" means that one feature interaction conducts selection on the value level while another conducts selection on the field level. For instance, a 2nd-order feature interaction <*user id*, *city*> may conduct selection on the value level, as users' preferences may differ a lot, while another 2nd-order feature interaction <*date*, *city*> may conduct selection on the field level for the sake of selection efficiency. Hence, "hybrid" refers to the different selection granularities across feature interactions.
### **W3: Writing**
We appreciate the reviewer pointing out the confusing and unclear points in our writing. Accordingly, we address each point the reviewer mentioned as follows.
#### *Introduction*
Please kindly allow us to highlight our paper's contributions here. We will revise the last paragraph of the introduction accordingly.
First and foremost, we extend the granularity of feature interaction selections from field level to value level. To the best of our knowledge, we are the first to propose such an extension.
Second, to tackle the size explosion problem brought by such an extension, we decompose the selection space via tensor factorization.
Third, for the sake of selection efficiency, we propose a hybrid-grained selection approach (hybrid between value level and field level) named OptFeature.
Finally, we conduct extensive experiments to validate the efficiency and effectiveness of OptFeature on three large-scale benchmark datasets. Multiple ablation studies are also conducted to investigate different aspects of our approach, which can help readers better understand and utilize it in their works.
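As a rough illustration of the second contribution (decomposing the value-level selection space), the following numpy sketch shows the memory saving of a rank-$r$ factorization over a dense $m \times m$ matrix in the 2nd-order case. The shapes, rank, and scoring function are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

# Instead of storing a dense m x m value-level selection matrix,
# keep two rank-r factors and compute any entry on the fly.
rng = np.random.default_rng(0)
m, r = 100_000, 8                 # feature values, factorization rank (illustrative)

U = rng.normal(size=(m, r))
V = rng.normal(size=(m, r))

def selection_score(i, j):
    # Score of the value-level interaction (i, j), computed on demand
    return float(U[i] @ V[j])

dense_entries = m * m             # what the naive m x m matrix would need
factored_entries = 2 * m * r      # what the decomposition stores
assert factored_entries < dense_entries
```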
#### *Related Work (Section 2.1)*
We thank the reviewer for pointing this out. Generally speaking, neural architecture search techniques are part of the techniques that inspired several previous works on feature interaction selection[1,2], as is continuous sparsification[3] in Section 2.2. To address the reviewer's concern, we will merge these two sections and highlight their relationship with our work at the beginning.
#### *Section 3.3.2*
We explicitly want to thank the reviewer for pointing this out, which helps make our paper more self-contained. We forgot to mention the determination of $\alpha$ in our paper. The reason is that determining $\alpha$ is no different from DARTS[4] and previous works[5,6]: it is relaxed into a continuous vector (each element between 0 and 1) during the search and discretized into a deterministic vector (each element either 0 or 1) afterwards. Please check the following addition to Section 3.3.2 describing how $\alpha$ is determined.
- Inspired by DARTS[4] and its success in previous works[1,5,6], we relax the hybrid tensor $\alpha$ into a continuous tensor $\alpha_c \in \mathcal{R}^{n^t}$, which can be trained jointly with other parameters via gradient descent. To ensure convergence, we apply the sigmoid function over $\alpha_c$. Hence, during training time, Eq 6 (original Eq 6 $\mathbf{A} = \alpha \mathbf{A}_f + (1 - \alpha) \mathbf{A}_v$) can be rewritten as:
$$\mathbf{A} = \text{sigmoid}(\alpha_c) \mathbf{A}_f + (1 - \text{sigmoid}(\alpha_c)) \mathbf{A}_v$$
We will also add the following sentence to Section 3.3.3, which describes how we conduct retraining.
- During the re-training stage, the optimal hybrid tensor is determined as $\alpha^* = \mathbb{1}_{\alpha_c > 0}$ following previous works[4-6].
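The relaxation and discretization described above can be sketched in numpy as follows. Shapes and values are illustrative; the actual implementation operates on the full hybrid tensor $\alpha_c \in \mathcal{R}^{n^t}$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def blend(alpha_c, A_field, A_value):
    # Training time: continuous relaxation of Eq. 6,
    # A = sigmoid(alpha_c) * A_f + (1 - sigmoid(alpha_c)) * A_v
    g = sigmoid(alpha_c)
    return g * A_field + (1.0 - g) * A_value

def discretize(alpha_c):
    # Re-training stage: alpha* = 1[alpha_c > 0], a hard 0/1 choice
    return (alpha_c > 0).astype(np.float64)

alpha_c = np.array([-2.0, 0.5, 3.0])
A_f = np.ones(3)
A_v = np.zeros(3)

soft = blend(alpha_c, A_f, A_v)   # smoothly between A_v and A_f
hard = discretize(alpha_c)        # exactly 0 or 1 per interaction
assert np.all((soft > 0) & (soft < 1))
assert hard.tolist() == [0.0, 1.0, 1.0]
```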
### **Reference**
1. Autofis: Automatic feature interaction selection in factorization models for click-through rate prediction (KDD2020)
2. Autoias: Automatic integrated architecture searcher for click-through rate prediction (CIKM 2021)
3. Progressive feature interaction search for deep sparse network (NeurIPS 2021)
4. DARTS: Differentiable Architecture Search (ICLR 2019)
5. Memorize, factorize, or be naive: Learning optimal feature interaction methods for CTR prediction (ICDE 2022)
6. GAIN: A Gated Adaptive Feature Interaction Network for Click-Through Rate Prediction (Sensors 2022)
*Please notice that we use the format "title (venue)" here due to the page limit. We fully appreciate all authors' contributions to the community.*
---
Rebuttal Comment 1.1:
Comment: Hi Reviewer Bzgv, we noticed that you increased your rating from 3 to 4, but we didn't receive a response. May we ask whether there are any further concerns? Kindly let us know if you have any questions or unaddressed concerns.
---
Rebuttal 2:
Comment: Hi Reviewer Bzgv,
We would again appreciate the valuable and thoughtful review. Since the deadline for the discussion period is approaching, it would be great to have feedback on if our response addresses the concerns raised in your initial review. | Summary: This paper tackles the problem of modeling fine-grained feature interactions in high-dimensional sparse features.
A hybrid-grained feature interaction selection method is proposed, which operates on both field and value for deep sparse networks.
To handle the increase in computation, a decomposed form of the selection space is done, which greatly reduces the computational requirements of modeling.
Results on deep sparse networks benchmarks show that the proposed method achieves SOTA results while being more computationally efficient.
Strengths: - Strong results in terms of performance on established benchmarks and computational efficiency, demonstrating the effectiveness of the proposed method.
- The proposed method seems generalizable and can be applied to other methods.
- All experimental parameters are provided, making reproduction straightforward.
- The writing is fairly clear and easy to understand.
Weaknesses: - Experimental results:
- The proposed method is a simple tensor decomposition for improved efficiency and the additional consideration of more features. Such choices are generalizable to other architectures (as mentioned in lines 109-112) but this is not demonstrated in the paper. I would like to see the application of the proposed components to other existing approaches.
- Significance of results:
- The AUC and Logloss scores differ by less than 0.001 between the proposed method and the previous SOTA. Is this significant? I suggest the authors add confidence intervals to Tables 1 and 2 for easier comparison.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are discussed at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Hi Reviewer 57Bi:
Thanks for your effort in reviewing our paper and appreciating our effort. Below are responses to your questions.
### **W1: generalization**
We deeply agree with the reviewer about the importance of generalization.
To investigate this aspect, we include ablation studies regarding various interaction operations, embedding dimensions and feature interaction orders in Appendix C1, C2 and C3, respectively. However, the reviewer's comment reminds us to be self-contained in the main text. We will append additional references to the appendix sections in the main text regarding the generalization issue. Hopefully, this can address the reviewer's concern.
More broadly speaking, we agree with the reviewer that it would be interesting to see how our method can be combined with more complex methods, such as embedding dimension search or feature interaction operation selection. However, the major difficulty lies in disentangling the influence between different methods, which may lead to sub-optimal results. This, frankly speaking, is one of our future research projects. Kindly pay attention to our future papers regarding this aspect.
### **W2: significance**
Thanks for pointing this out! Adding a statistical analysis can better validate our method. Please check the common response (CW1) for details.
---
Rebuttal Comment 1.1:
Title: Follow-up by Reviewer
Comment: Thank you for providing detailed responses to my concerns. I have read through all the other reviews and responses. I will maintain my rating of "Weak Accept".
---
Reply to Comment 1.1.1:
Comment: Thanks again for appreciating our effort. We will make the corresponding change to the paper later. | Summary: This paper introduces a hybrid-grained feature interaction selection approach that targets both feature field and feature value for deep sparse networks and decomposes the selection space using tensor factorization and calculating the corresponding parameters on the fly.
Strengths: Extending the selection granularity of feature interactions from the field level to the value level.
Introducing a hybrid-grained feature interaction selection space, which explicitly considers the relation between the field level and the value level.
Combining tensor decomposition and sparsification to perform selection on the shrinking space.
Weaknesses: 1. The evaluation datasets are pretty small (the number of feature fields is around 11-26). For recommendation system optimization work, it is usually better to show results on large-scale datasets, such as industrial datasets, to demonstrate scalability and performance.
2.Missing several references:
AutoFAS,
NAS-CTR,
AutoIAS,
GAIN: A Gated Adaptive Feature Interaction Network for Click-Through Rate Prediction
Adding some discussion of or comparison to them would be better.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, the authors discussed some limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Hi Reviewer 21ZP:
Thanks for your effort in reviewing our paper and offering constructive suggestions.
### **W1: small datasets**
We thank the reviewer for the helpful suggestion of evaluating on datasets with different statistics. Our response to your concern is split into the following two parts.
First, to ensure we are on the same page, we want to highlight the concepts of *feature field* and *feature value*. We include the following table with the statistics of our datasets (also described in detail in Appendix B1). As we can observe, the number of feature values, which determines the maximum selection space, is around the $10^6$ level.
| Dataset | \#Samples | \#field | \#values | pos ratio |
| --- | --- | --- | --- | --- |
| Criteo | $4.6 \times 10^7$ | 39 | $6.8 \times 10^6$ | 0.2562 |
| Avazu | $4.0 \times 10^7$ | 24 | $4.4 \times 10^6$ | 0.1698 |
| KDD12 | $1.5 \times 10^8$ | 11 | $6.0 \times 10^6$ | 0.0445 |
Second, we add one additional experiment on a private large-scale industrial dataset. This dataset contains nearly 3.5 million samples with 134 feature fields and 2498 feature values. Please notice that this dataset has more feature fields and fewer feature values, which differs from the previous benchmarks with fewer feature fields and more feature values. The following results further demonstrate the effectiveness of OptFeature. The observations are also consistent with those on the three public benchmarks. We will include these results in the appendix later.
| | AUC | LogLoss |
| --- | --- | --- |
| LR | 0.7745 | 0.2189 |
| FM | 0.7780 | 0.2181 |
| FNN | 0.7838 | 0.2168 |
| DeepFM | 0.7824 | 0.2179 |
| DCN | 0.7844 | 0.2167 |
| IPNN | 0.7883 | 0.2147 |
| AutoFIS | 0.7889 | 0.2146 |
| PROFIT | 0.7849 | 0.2161 |
| OptFeature | 0.7893 | 0.2142 |
### **W2: missing reference**
Thanks for referring to these relevant papers, as they are relevant and related. Please refer to the common response(CW2) for our response and corresponding changes.
---
Rebuttal Comment 1.1:
Title: Thanks for providing the results on large-scale industrial dataset.
Comment: Thanks for providing the results on the large-scale industrial dataset. I think the new comparisons addressed my concerns. I increase my score from borderline accept to weak accept.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank reviewer 21ZP for the further feedback, and we are glad that your concerns are addressed. We will make the proper change in the paper later. | Rebuttal 1:
Rebuttal: Hi Reviewers and PCs:
We want to thank you all for your effort in helping us improve this paper. Below are responses to some of the common concerns. Kindly notice that we group reviewers' questions into points, as some of the weaknesses and questions are repetitive or similar.
### **CW1: result significance**
Thanks to all reviewers for highlighting this point. Here we list a table summarizing recent and relevant papers working on the datasets we adopted, showing their relative improvements compared with the best baseline in the original paper (N/A means the corresponding result is not reported). Compared with our relative improvement (last line of the table), we can observe that the improvement achieved by our proposed method is reasonable over all three datasets.
| | Criteo AUC | Criteo LogLoss | Avazu AUC | Avazu LogLoss | KDD12 AUC | KDD12 LogLoss |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| AutoInt[1] | +0.0052 | +0.0053 | -0.0006 | +0.0005 | +0.0084 | +0.0020 |
| AutoFIS[2] | +0.0001 | +0.0000 | +0.0016 | +0.0009 | N/A | N/A |
| PROFIT[3] | +0.0001 | +0.0001 | +0.0027 | +0.0046 | N/A | N/A |
| AutoIAS[4] | -0.0045 | -0.0943 | -0.0001 | -0.0003 | N/A | N/A |
| GAIN[5] | +0.0005 | +0.0004 | +0.0001 | +0.0001 | N/A | N/A |
| NAS-CTR[6] | +0.0009 | +0.0004 | +0.0073 | +0.0049 | N/A | N/A |
| OptFeature(ours) | +0.0004 | +0.0004 | +0.0019 | +0.0005 | +0.0013 | +0.0002 |
We also conducted a two-tailed t-test comparing OptFeature against the best-performing baseline; the corresponding $p$-value is $<0.005$, which is usually denoted as statistically significant[2,6]. Corresponding changes will be made to the paper later.
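As an illustration of such a test, here is a stdlib-only sketch of the two-sample (Welch) t statistic over per-seed AUC scores; in practice the two-tailed p-value is then obtained from the t distribution (e.g. via scipy.stats.ttest_ind). The per-seed scores below are made up purely for illustration, not the paper's numbers.

```python
import math
import statistics

def welch_t(a, b):
    # Welch's t statistic for two independent samples,
    # e.g. per-seed AUC of two models over five random seeds.
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

ours     = [0.8115, 0.8114, 0.8116, 0.8113, 0.8115]  # hypothetical AUCs
baseline = [0.8110, 0.8109, 0.8111, 0.8108, 0.8110]  # hypothetical AUCs

t = welch_t(ours, baseline)
assert t > 0  # our mean is higher; the p-value follows from the t CDF
```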
### **CW2: missing discussion of some related works** and **unclear expression of novelty**
We thank reviewers 21ZP, Bzgv and eecX for pointing out our flaws. We revised the 2nd paragraph in Section 2.3 to address these two issues. Kindly check the following:
```
With the advancement of neural architecture search[7-9] and continuous sparsification[10], various methods have been proposed to select the informative feature interactions[2,3] and reduce computational costs. AutoFis[2] employs a sparse optimizer to select suitable feature interactions at the field level. PROFIT[3] formulates field-level feature interaction selection within a distilled search space and employs a progressive search for efficient exploration. AutoIAS[4] takes one step further to integrate feature interaction selection as part of the search space and jointly conducts the search with other components like the MLP architecture or embedding dimension. GAIN[5], on the other hand, focuses on DCN[11]-like architectures and conducts feature interaction selection jointly with model training. However, all previous works conduct feature interaction selection on the field level. Our work builds on the existing approaches for modelling feature interactions in DSNs. More precisely, we extend the selection granularity to the value level and propose a hybrid-grained selection approach.
Furthermore, there also exist some works, such as OptInter[12], AutoFeature[13], and NAS-CTR[6], that take a different angle and search for suitable operations (such as inner product, outer product or element-wise sum) to model each feature interaction properly. These works are orthogonal to our study.
```
### **Reference**
1. Autoint: Automatic feature interaction learning via self-attentive neural networks (CIKM 2019)
2. Autofis: Automatic feature interaction selection in factorization models for click-through rate prediction (KDD2020)
3. Progressive feature interaction search for deep sparse network (NeurIPS 2021)
4. Autoias: Automatic integrated architecture searcher for click-through rate prediction (CIKM 2021)
5. GAIN: A Gated Adaptive Feature Interaction Network for Click-Through Rate Prediction (Sensors 2022)
6. NAS-CTR: Efficient Neural Architecture Search for Click-Through Rate Prediction (SIGIR 2022)
7. Neural Architecture Search with Reinforcement Learning (ICLR 2017)
8. Neural architecture optimization (NeurIPS 2018)
9. DARTS: Differentiable Architecture Search (ICLR 2019)
10. Estimating or propagating gradients through stochastic neurons for conditional computation (CoRR 2013)
11. Deep & cross network for ad click predictions (ADKDD@KDD 2017)
12. Memorize, factorize, or be naive: Learning optimal feature interaction methods for CTR prediction (ICDE 2022)
13. Autofeature: Searching for feature interactions and their architectures for click-through rate prediction (CIKM 2020)
*Please notice that we use the format "title (venue)" here due to the page limit. We fully appreciate all authors' contributions to the community.* | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis | Accept (poster) | Summary: To be effective, a basis must be tailored to the specific set of time series data and exhibit distinct correlation with each time series within the set. As far as we know, the state-of-the-art methods are limited in their ability to satisfy both of these requirements simultaneously. To address this issue, the authors propose BasisFormer, an end-to-end time series forecasting architecture that leverages learnable and interpretable bases. Firstly, the authors acquire bases through adaptive self-supervised learning, which treats the historical and future sections of the time series as two distinct views and employs contrastive learning. Secondly, the authors design a Coef module that calculates the similarity coefficients between the time series and bases in the historical view via bidirectional cross-attention. Finally, the authors present a Forecast module that selects and consolidates the bases in the future view based on the similarity coefficients, resulting in accurate future predictions.
Strengths: 1. The authors acquire bases through adaptive self-supervised learning, which treats the historical and future sections of the time series as two distinct views and employs contrastive learning.
2. The authors design a Coef module that calculates the similarity coefficients between the time series and bases in the historical view via bidirectional cross-attention.
3. The authors present a Forecast module that selects and consolidates the bases in the future view based on the similarity coefficients, resulting in accurate future predictions.
Weaknesses: 1. The authors claim that this is a self-supervised learning model, and according to reports in existing studies, its performance should be weaker than that of supervised models. I hope the authors can compare against some time series prediction models based on self-supervised learning; forgive me for being skeptical about the effect for now.
minor comment:
2. I would like the authors to be clearer about Eq. 7: why is explainability related to the smoothness term, and why is the common l2-norm used?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors addressed all my concerns. So, I have no further comments.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1 - comparison with self-supervised methods*
First, we would like to clarify that our model should not be classified as a self-supervised model in the strict sense: it uses a supervised loss function for prediction, while only the alignment and smoothing loss functions are self-supervised. It is worth noting that our approach differs from classical self-supervised algorithms, which involve a two-step process of self-supervised training followed by a supervised ridge regression. In contrast, we adopt an end-to-end training approach that integrates the self-supervised module with the prediction task.
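For concreteness, the end-to-end objective can be sketched as a weighted sum of the supervised prediction loss and the two self-supervised terms. This is a minimal NumPy illustration only: the weights `lam_align` and `lam_smooth` are placeholders, and a squared-error term stands in for the paper's InfoNCE alignment loss.

```python
import numpy as np

def total_loss(pred, target, coef_past, coef_future, basis,
               lam_align=0.1, lam_smooth=0.01):
    """Illustrative combination of the three loss terms.

    pred, target: (..., horizon) forecast and ground truth
    coef_past, coef_future: (batch, n_basis) similarity coefficients
    basis: (n_basis, length) learnable basis sequences
    """
    # supervised prediction loss (MSE)
    l_pred = np.mean((pred - target) ** 2)
    # alignment: the past and future coefficients of the same series
    # should agree (simple stand-in for the InfoNCE loss)
    l_align = np.mean((coef_past - coef_future) ** 2)
    # smoothness: penalize first differences along each basis sequence
    l_smooth = np.mean(np.diff(basis, axis=-1) ** 2)
    return l_pred + lam_align * l_align + lam_smooth * l_smooth
```

Training all three terms jointly is what lets the basis specialize for forecasting rather than being fixed before the supervised stage.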
To address the reviewer's request, we have compared our model with several recent self-supervised methods, namely [1] and [2]. The comparative results are presented in the following table. For the experiments, we fix the input length at 96 and vary the output length from 96 to 720. **The table below illustrates that Basisformer outperforms Cost in terms of both MSE and MAE across all cases.** It is important to note that Cost is trained using a two-step process: self-supervised training followed by supervised ridge regression. In contrast, Basisformer trains the basis, coef, and forecast modules end-to-end. This enables Basisformer to learn a basis that is specifically tailored for forecasting, potentially leading to improved performance.
| Dataset | Output length | Ours (MSE) | Ours (MAE) | Cost (MSE) | Cost (MAE) |
|:-----------:|:---:|:------:|:------:|:------:|:------:|
| ETT | 96 | 0.184 | 0.266 | 0.280 | 0.375 |
| | 192 | 0.248 | 0.307 | 0.480 | 0.506 |
| | 336 | 0.321 | 0.355 | 0.805 | 0.676 |
| | 720 | 0.410 | 0.404 | 1.562 | 0.955 |
| electricity | 96 | 0.165 | 0.259 | 0.199 | 0.290 |
| | 192 | 0.178 | 0.272 | 0.199 | 0.292 |
| | 336 | 0.189 | 0.282 | 0.212 | 0.307 |
| | 720 | 0.223 | 0.311 | 0.246 | 0.338 |
| exchange | 96 | 0.085 | 0.205 | 0.263 | 0.393 |
| | 192 | 0.177 | 0.299 | 0.464 | 0.521 |
| | 336 | 0.336 | 0.421 | 0.833 | 0.691 |
| | 720 | 0.854 | 0.670 | 1.192 | 0.871 |
| traffic | 96 | 0.444 | 0.315 | 0.576 | 0.377 |
| | 192 | 0.460 | 0.316 | 0.546 | 0.359 |
| | 336 | 0.471 | 0.317 | 0.555 | 0.363 |
| | 720 | 0.486 | 0.318 | 0.591 | 0.379 |
| weather | 96 | 0.173 | 0.214 | 0.372 | 0.415 |
| | 192 | 0.223 | 0.257 | 0.528 | 0.517 |
| | 336 | 0.278 | 0.298 | 0.835 | 0.666 |
| | 720 | 0.355 | 0.347 | 1.394 | 0.894 |
| illness | 96 | 1.550 | 0.814 | 2.330 | 0.923 |
| | 192 | 1.516 | 0.819 | 2.497 | 0.962 |
| | 336 | 1.877 | 0.907 | 2.650 | 1.032 |
| | 720 | 1.878 | 0.902 | 2.829 | 1.062 |
*Q2 - I would like the authors to be clearer about Eq. 7: why is explainability related to the smoothness term, and why is the common l2-norm used?*
The incorporation of a loss term related to smoothness serves two significant purposes. **Firstly, it plays a crucial role in mitigating the risk of the learned basis fitting the noise present in the data, which would result in the emergence of high-frequency patterns.** If the basis is corrupted by noise, it becomes difficult to extract meaningful information such as trends or seasonality. Therefore, ensuring a smooth basis is essential for obtaining interpretable results.
**Secondly, the inclusion of the smoothness loss term directly impacts the overall performance of the model.** This is evident from the results of ablation experiments involving different loss functions, as presented in Table 5. As our primary objective is long-range future forecasting, it is advantageous to have smoothly changing patterns in the basis.
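The noise-suppression intuition above can be illustrated numerically: a smoothness penalty on first differences is orders of magnitude larger for a noise-corrupted basis than for a smooth seasonal one, so minimizing it discourages high-frequency patterns. This is a toy sketch, not the paper's exact l2 smoothness term from Eq. 7.

```python
import numpy as np

def smoothness_penalty(basis):
    """Mean squared first difference along the time axis of a basis."""
    return np.mean(np.diff(basis, axis=-1) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
smooth_basis = np.sin(2 * np.pi * t)                        # slow seasonal pattern
noisy_basis = smooth_basis + rng.normal(0.0, 0.5, t.shape)  # same pattern plus noise

# Minimizing the penalty steers the learned basis away from fitting
# high-frequency noise, keeping it interpretable (trend/seasonality).
ratio = smoothness_penalty(noisy_basis) / smoothness_penalty(smooth_basis)
```

For these settings `ratio` is far above 1, showing why the penalty acts as an interpretability-preserving regularizer.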
---
Rebuttal Comment 1.1:
Title: Thanks to the author reply, which addressed all my concerns.
Comment: I have no further questions and I have improved my confidence score.
---
Reply to Comment 1.1.1:
Title: reply to reviewer yNQz
Comment: We extend our gratitude to the reviewer yNQz for offering meticulous comments and encouraging feedback. Your insightful suggestions have greatly contributed to enhancing the quality of our paper. | Summary: This paper addresses the problem of finding effective bases for time series forecasting models. Current methods are limited in their ability to satisfy the requirements of being tailored to specific time series data and exhibiting distinct correlation with each time series. To tackle this challenge, the authors propose BasisFormer, an end-to-end architecture that leverages learnable and interpretable bases. Bases are obtained through adaptive self-supervised learning, where historical and future time series sections are treated as distinct views and contrastive learning is used. The proposed architecture includes a Coef module that calculates similarity coefficients between time series and bases using bidirectional cross attention, and a Forecast module that selects and consolidates bases for accurate future predictions. Extensive experiments on six datasets demonstrate that BasisFormer outperforms previous methods for both univariate and multivariate forecasting tasks, achieving considerable improvements in performance.
Strengths: This paper has studied a time series forecasting problem. The paper is well written. The significance of this work lies in addressing the limitations of existing methods in time series forecasting by proposing BasisFormer, an architecture that leverages learnable and interpretable bases. This approach allows for tailored modeling of specific time series data and distinct correlations with each time series, leading to improved forecasting accuracy.
Weaknesses: However, I still have several concerns about this paper. First, the illustration and motivation for the basis/bases is not very clear. What are the bases in essence? How can a basis be defined for a time series? Are bases really needed for time series forecasting? What benefits can basis learning bring, and how can they be verified? Second, regarding the main contribution, the authors combine several techniques, including self-supervised learning, contrastive learning, cross attention, and basis selection; however, I don't see the necessity of each component. Third, the experimental results are less convincing. The proposed method adopts contrastive learning, but it does not compare its performance with other self-supervised learning methods, such as [1] [2]. Other works adopting a periodic basis are also not discussed in the related work [3].
[1] Cost: contrastive learning of disentangled seasonal-trend representations for time series forecasting. ICLR 2022.
[2] Time-Series Representation Learning via Temporal and Contextual Contrasting. IJCAI 2021.
[3] DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting. ICLR 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1 - motivation for bases*
As mentioned in Lines 20-22 on Page 1, bases are defined as **sequences that capture the underlying temporal patterns for a set of time series and serve as the key factors driving changes in the data over time**. They may encompass trends, seasonalities, and other vital elements that aid in modeling and forecasting time series data.
Regarding the necessity of bases, **it is worth noting that nearly all time series forecasting models can be viewed as basis-driven models.** For example, Transformer-related models [8] [9] rely on covariate encoding, akin to bases, and some models explicitly depend on specific bases such as Fourier or Legendre bases [2] [3]. Experimental validation in this regard has been carried out previously, as exemplified by the following table citing experimental results from Dlinear [4]. In the presented experiments, the absence of bases (i.e., covariates) represented by timestamps (denoted as "wo/Temp") leads to a significant decrease in model performance, underscoring the importance of bases. It is important to note that the values in the table are presented in MSE format, where smaller values indicate better performance.
| Methods | Embedding | 96 | 192 | 336 | 720 |
|:----------:|:---------:|:-----:|:------:|:-----:|:-----:|
| Fedformer | ALL | 0.597 | 0.606 | 0.627 | 0.649 |
| | wo/Temp | 0.613 | 0.623 | 0.650 | 0.677 |
| Autoformer | ALL | 0.629 | 0.647 | 0.676 | 0.638 |
| | wo/Temp | 0.681 | 0.665 | 0.908 | 0.769 |
| Informer | ALL | 0.719 | 0.696 | 0.777 | 0.864 |
| | wo/Temp | 0.754 | 0.780 | 0.903 | 1.259 |
The objective of Basisformer is to further show that learnable basis is preferred to manually specified basis for time series forecasting, and the usefulness of learnable basis is demonstrated in Table 5 (Page 8) and Table 6 (Page 9). In Table 5, by learning the basis adaptively from the data via the InfoNCE loss (cf. Section 3.3), our proposed Basisformer achieves an average performance improvement of 5.2%. Table 6 further supports this finding by showing that using a learnable basis instead of the manually specified covariates in Autoformer [8] and Fedformer [9] also leads to performance enhancement.
*Q2 - contribution of contrastive learning and cross attention*
As discussed in the second paragraph of Section I, the application of bases for time series forecasting involves three essential steps. Firstly, an appropriate basis is learned for the set of time series under consideration. Secondly, each time series in the set is decomposed based on the learned basis, which entails calculating coefficients or weights that represent the similarity or projection energy of the time series with respect to each vector in the basis. Finally, the prediction is determined by aggregating the future part of the basis using the computed weights.
To this end, we employ self-supervised learning, specifically contrastive learning, to learn the basis, as described in Section 3.3. Cross attention is then utilized to perform basis projection or selection, which corresponds to the second step, as explained in Section 3.1.
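The contrastive alignment step can be sketched as an InfoNCE-style loss over the past- and future-view coefficient vectors, where row `i` of each view comes from the same series (positive pair) and other rows in the batch act as negatives. This is a minimal NumPy illustration; the paper's exact loss and tensor shapes may differ.

```python
import numpy as np

def info_nce(coef_past, coef_future, tau=0.1):
    """InfoNCE-style alignment between past and future coefficient vectors.

    coef_past, coef_future: (batch, n_basis). Row i of each matrix comes
    from the same series, so (i, i) pairs are positives and (i, j != i)
    pairs within the batch serve as negatives.
    """
    a = coef_past / np.linalg.norm(coef_past, axis=1, keepdims=True)
    b = coef_future / np.linalg.norm(coef_future, axis=1, keepdims=True)
    logits = (a @ b.T) / tau                     # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # diagonal = positive pairs
```

Minimizing this loss pulls together the coefficient vectors of the two views of each series, which is exactly the past/future consistency the basis is trained to satisfy.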
**To demonstrate the necessity of the self-supervised basis learning module, we conducted an ablation study (Lines 283-289, Page 7).** The results show that employing only the standalone self-supervised loss, namely the infoNCE loss, leads to an average performance improvement of 5.2%.
Furthermore, we conducted additional experiments to validate the effectiveness of cross attention. In these experiments, we removed cross attention and basis selection by setting the number of stacked BCABs (Bidirectional Cross-Attention Blocks) in the model to 0. The experimental results are presented in the table below.
| Output length | MSE (0 BCABs) | MSE (1 BCAB) | MSE (2 BCABs) | MSE (3 BCABs) | MSE (4 BCABs) |
|:-------------:|:-----:|:-----:|:-----:|:-----:|:-----:|
| 96 | 0.186 | 0.166 | 0.166 | 0.168 | 0.171 |
| 192 | 0.187 | 0.176 | 0.176 | 0.176 | 0.179 |
| 336 | 0.208 | 0.187 | 0.190 | 0.190 | 0.191 |
| 720 | 0.244 | 0.228 | 0.218 | 0.234 | 0.237 |
| avg | 0.206 | 0.189 | 0.187 | 0.192 | 0.194 |
**The table highlights that, without altering other conditions, the absence of BCAB (i.e., cross attention for basis selection) resulted in an average performance decrease of 10.16% compared to the best-performing configuration with 2 layers of BCAB.** This emphasizes the effectiveness of cross attention in the model.
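The basis-selection role of cross attention can be sketched in one head and one block (the actual BCAB is a stacked, bidirectional multi-head module; this NumPy version is illustrative only):

```python
import numpy as np

def coef_by_cross_attention(series_emb, basis_emb):
    """One-head sketch of attention-based basis selection.

    series_emb: (n_series, d) embeddings of the input series
    basis_emb:  (n_basis, d) embeddings of the basis vectors
    Returns a (n_series, n_basis) row-stochastic coefficient matrix.
    """
    d = series_emb.shape[1]
    scores = (series_emb @ basis_emb.T) / np.sqrt(d)  # scaled dot-product
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)
```

Each series thus receives its own soft selection over the shared basis, which is the coefficient matrix the Forecast module later uses for aggregation.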
*Q3 - comparison with self-supervised methods*
Due to time constraints, we are currently able to present comparative results with only one method, Cost. For the experiments, we fix the input length at 96 and vary the output length from 96 to 720. **Table R1 in the PDF attached to the general response illustrates that Basisformer outperforms Cost in terms of both MSE and MAE across all cases.** It is important to note that Cost is trained using a two-step process: self-supervised training followed by supervised ridge regression. In contrast, Basisformer trains the basis, coef, and forecast modules end-to-end. This enables Basisformer to learn a basis that is specifically tailored for forecasting, potentially leading to improved performance.
**Regarding DEPTS, we will discuss it in Section 2 as follows:** "DEPTS tackles the challenges posed by intricate dependencies and multiple periodicities in periodic time series through the implementation of a deep expansion learning framework. However, the complex initialization and optimization strategies employed by DEPTS, as well as its limitation of only being applicable to periodic sequences, have motivated us to develop a simpler and more universally applicable basis learning framework."
---
Rebuttal 2:
Title: Request for Reviewer CQBS to respond to authors' rebuttal
Comment: Reviewer CQBS, as there are only 2 days left in the author discussion period, would you please read the authors' response, explain the extent to which their answers address your concerns, and whether you will adjust your rating.
If you decide to keep your score, please justify this decision, specifying which aspects of the paper or response have been the deciding factors in you keeping your score. | Summary: This paper studies basis learning for time series forecasting for which the past and future basis representations are aligned. Contrastive learning is used to build the time series basis and similarity between the past values and basis is used for time series prediction. The experiments on several time series forecasting datasets show improvement, especially for multivariate forecasting, when the input length (history) is short.
Strengths: - The paper is overall well written
- The idea of using contrastive learning to learn basis is sound
- The proposed approach improves over baselines especially for multivariate case
Weaknesses: - There is a strong emphasis on the consistency of the representations for past and future, but this is not directly experimented with or evaluated
- Reproducibility is questionable without source code and scripts to run, as the approach is composed of several components and training steps
- The proposed approach particularly improves over baselines when the history used for prediction is shorter; however, the motivation/need for a shorter history is not supported
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Basisformer seems to have more advantages over baselines for the multivariate time series forecasting, how do you explain this?
- How models are trained for the longer input sequence? Is it only inference time change?
- Increasing the input length gives more advantage to the baselines. Do you conduct further analysis of the inference time versus performance for different length of input?
- Do you plan to share the implementation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1 - past & future consistency*
Thank you for bringing this to our attention. **We have shown the consistency of representations for the past and future sequences in our model in Figure R1 in the PDF file attached to the general response**, where the attention mechanisms exhibit significant similarities between the past and future sequences. This demonstrates the consistency in the base elements between the two.
**Moreover, we have validated the significance of this past and future consistency in Table 5 (Page 8) and Table 6 (Page 9).** In Table 5, by ensuring such consistency through the InfoNCE loss, our proposed Basisformer achieves an average performance improvement of 5.2%. Table 6 further supports this finding by showing that enforcing this consistency in Autoformer [8] and Fedformer [9] also leads to performance enhancement.
*Q2 - reproducibility*
As mentioned at the end of the Abstract, the full code of our model will be available at the time of publication.
*Q3 - motivation for a short history*
As mentioned in Lines 392-393 in the supplementary material, "Throughout our research, we maintain consistency in our experimental settings by fixing the input length to be $96$ (with a reduced input length of $36$ for the illness dataset), instead of using a longer length. The main rationale behind this decision is that, in practical scenarios where the model is deployed as an online service and tasked with predicting a long range of the future at a granular level of minutes or hours, collecting a lengthy history (i.e., spanning 720 timestamps) for a large number of time series in real-time can be quite challenging. Therefore, **the adoption of an input length of 96 proves to be more practical and feasible.**"
Indeed this is a common practice in N-beats[1], Autoformer[8], and Fedformer[9].
On the other hand, to further demonstrate the superiority of Basisformer for longer input sequences, we have conducted experiments in Section A.2 in the supplementary material. As discussed in Lines 412-418, "concerning longer inputs, our method surpasses recent approaches such as Dlinear, FiLM, and N-HiTS, with an average MSE performance improvement of 1.35\%, 0.63\%, and 7.75\%, respectively, and a corresponding MAE performance improvement of 3.15\%, 2.33\%, and 4.06\%, respectively. It is noteworthy that our approach requires an input length of 192 (72 for the illness dataset), which is at least 40\% lower than the input length of the other three methods. **Furthermore, for even longer input lengths, our model's performance can be further enhanced, signifying that our approach can leverage limited data more efficiently.**"
*Q4 - performance is better for multivariate time series*
The advantages of Basisformer over baselines for multivariate time series forecasting can be explained by considering the nature of multidimensional time series and the role of our model's basis.
The basis functions in our model play a crucial role in capturing the primary patterns within the time series, akin to the principles of principal component analysis (PCA) to some extent. It is important to note that PCA performs better when the observed data are more correlated.
Multidimensional time series, in comparison to single-dimensional ones, tend to have higher levels of correlation. For instance, traffic datasets often exhibit peaks during morning and evening hours, indicating correlated patterns. Consequently, the presence of such correlations allows our model to effectively learn the basis functions, leading to improved forecasting performance.
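This intuition can be illustrated numerically with a toy NumPy example (not taken from the paper): when channels share a common pattern, a single principal component captures most of the variance, whereas independent channels admit no such compact shared basis.

```python
import numpy as np

def top1_variance_ratio(X):
    """Fraction of total variance captured by the first principal component."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    return (s[0] ** 2) / np.sum(s ** 2)

rng = np.random.default_rng(0)
T, D = 500, 8
shared = np.sin(np.linspace(0.0, 8 * np.pi, T))        # common periodic pattern
correlated = shared[:, None] + 0.1 * rng.normal(size=(T, D))
independent = rng.normal(size=(T, D))                  # no shared structure
```

Here `top1_variance_ratio(correlated)` is close to 1 while `top1_variance_ratio(independent)` stays small, mirroring why correlated multivariate series benefit more from a learned shared basis.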
**In summary, the utilization of basis functions in Basisformer, combined with the presence of correlated patterns in multidimensional time series, allows for better noise mitigation and identification of key patterns.** This ultimately enhances the overall performance of our approach for multivariate time series forecasting.
*Q5 - how to train the model for longer inputs*
Our model is trained using an end-to-end approach. When dealing with longer input sequences, we follow a similar methodology as N-hits [2] and Dlinear [4]. Specifically, we retrain the model using the extended input sequences along with their corresponding output sequences. Consequently, both the training and inference times are modified to accommodate the longer input sequences.
*Q6 - time vs input length*
We have provided a table below that displays the inference times for various input lengths, denoted as $I={96,192,336,720}$, and output lengths, denoted as $O={96,192,336,720}$. These measurements are based on the "exchange" dataset.
| | O=96 | O=192 | O=336 | O=720 |
|-------|-----------|-----------|-----------|-----------|
| **I=96** | 0.000833 | 0.001211 | 0.001419 | 0.002110 |
| **I=192** | 0.000884 | 0.001285 | 0.001437 | 0.002139 |
| **I=336** | 0.000893 | 0.001338 | 0.001469 | 0.002194 |
| **I=720** | 0.000941 | 0.001364 | 0.001547 | 0.002246 |
The table above illustrates the average inference time per instance of our algorithm under different configurations, measured in seconds. As observed, our algorithm exhibits notable speed, averaging at the millisecond level. Furthermore, when comparing the increase in output length to the extension of the input length, the additional inference time incurred by augmenting the input length is minimal. This is attributed to the preprocessing step where the input sequence, regardless of its length, is projected into a fixed-length (usually 100) sequence using a linear layer. **As a result, extending the input length does not significantly amplify the time consumption in our method.**
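The fixed-length preprocessing step can be sketched as follows. The length 100 is the value mentioned in the response ("usually 100"); the layer shape and batch size are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_to_fixed(x, weight):
    """Linear projection of a (batch, I) input to a fixed length-100 sequence."""
    return x @ weight

# Whatever the input length I, downstream modules only ever see a
# (batch, 100) tensor, so their cost is insensitive to I.
for I in (96, 192, 336, 720):
    x = rng.normal(size=(4, I))
    weight = rng.normal(size=(I, 100))  # per-input-length projection weight
    assert project_to_fixed(x, weight).shape == (4, 100)
```

Only the single matrix multiply grows with `I`, which is consistent with the near-flat rows of the inference-time table above.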
---
Rebuttal 2:
Title: Request for Reviewer z6WM to respond to authors' comments
Comment: Reviewer z6WM, as there are only 2 days left in the author discussion period, would you please read the authors' response, explain the extent to which their answers address your concerns, and whether you will adjust your rating.
If you decide to keep your score, please justify this decision, specifying which aspects of the paper or response have been the deciding factors in you keeping your score. | Summary: This paper proposed BasisFormer which is an end-to-end time series forecasting model that leverages learnable and interpretable bases. BasisFormer treats the historical and future sections of the time series as two distinct views and using contrastive learning. By making use of Coef module and Forecast module, the BasisFormer outperforms previous state-of-the-art methods for univariate and multivariate forecasting tasks.
Strengths: 1. Contrastive learning objective is applied for basis learning which guarantees the consistency between the historical and future sections of the time series. And when applying the SSL module to other frameworks, there is a performance improvement of approximately 5%, which suggests the general application of the learnable basis.
2. Based on the experiment results on six datasets, the proposed BasisFormer model outperforms previous SOTA methods on univariate and multivariate forecasting tasks.
3. The network architecture of BasisFormer is carefully designed and well analyzed through ablation studies and model comparison. The paper is well written and easy to understand.
Weaknesses: In 4.3, the author analyzed the interpretability of the learned bases by visualizing the time series and the corresponding learned basis. An additional visualization of the attention distribution in BCAB module can be helpful for checking different weights assigned to each basis in different attention heads, and thus understanding the network behavior.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: At this point, I don't have specific questions to ask. The paper is clearly written.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1 - Visualization*
Thanks for pointing this out! We have incorporated the visualization of the attention map of the BCAB module on the traffic dataset, as depicted in Figure R1 in the PDF file attached to the global response. This visualization demonstrates that different time sequences have distinct attention scores over the same set of basis vectors.
Additionally, we have provided a visualization of a specific time sequence alongside the features corresponding to the highest and lowest attention scores, as shown in Figure R2. The highest attention score is 0.2316, while the lowest attention score is 0.03371. Figure R2 highlights that the representation with a total of 8 sets of main peaks (Figure R2(c)) more comprehensively captures the patterns of the data compared to the configuration with only 2-3 main peaks (Figure R2(b)). This indicates a correlation between the attention scores and the relationship between time sequences and features.
It is important to note two key points. Firstly, since bases represent condensed patterns of time sequences, it is unlikely for a base to be identical to any single time sequence, especially when N is small. Secondly, after the bases are processed through multiple linear and nonlinear layers in the network, they correspond to predicted sequences. Therefore, the numerical values of the bases serve as reference points only. **The focus should be on the patterns exhibited by the bases.** | Rebuttal 1:
Rebuttal: **General Response to All Reviewers**
We sincerely thank all the reviewers for their valuable suggestions. We are delighted by the unanimous recognition of our work and appreciate the reviewers' positive feedback on the carefully designed network architecture and the use of contrastive learning in BasisFormer.
We have thoroughly reviewed each of the reviewers' questions and suggestions, and we are grateful for their patience and diligence. In response, we have conducted additional experiments, introduced a new baseline, provided visualizations to address any lingering questions, and emphasized the significance of our work.
In the following rebuttal, we address each reviewer's comments individually. The reviewer's comments are shown in italics, and the paragraph(s) following them are the authors' response. Unless otherwise specified, all references to pages, equations, sections, and citations refer to the original paper. Additionally, figures, tables, and citations prefixed with "R" (e.g., [R1]) are newly added in this rebuttal. All newly added images and a table are enclosed within a separate single-page PDF attached to this general response. We will incorporate the suggested revisions into the final camera-ready version to enhance the clarity and persuasiveness of our paper.
Once again, we would like to express our gratitude to the reviewers for their insightful feedback, which has helped us identify areas for improvement and refine our work. We welcome any further insights or concerns that would contribute to enhancing the paper according to the reviewers' perspectives.
Pdf: /pdf/99c868301ef5768b68a1dd734b2bcfa987dc1d6d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors of the paper propose a time series forecasting architecture with a self-supervised method for basis learning, called BasisFormer. Their assumption is that the selection of basis for a time series is consistent across both historical and future sections of the time series. They introduce a Coef module that measures the similarity between the time series and the basis in the historical view via bidirectional cross-attention, and a Forecast module that consolidates vectors from the basis in the future view according to the similarity yielded by the Coef module. In their evaluation they demonstrate improvements in forecasting tasks.
Strengths: 1. The paper is well-written with clear explanations of the proposed architecture.
2. The empirical results, including the ones that were presented in the supplementary material, support the original claims across the manuscript.
Weaknesses: 1. In order to further validate the claims presented in this work, I would expect seeing another comparison to methods involve discretization of time-series, such as [Moskovitch, R. and Shahar, Y., 2015. Classification of multivariate time series via temporal abstraction and time intervals mining. Knowledge and Information Systems].
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What happens if bases are not learnable directly from the time-series, but require a domain expert's KB?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The novelty of this work is limited to cases where bases do exist and are detectable automatically in the input time-series, where in many cases the provided input is shorter than the length required for identifying cases of seasonality, for example.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Q1 – Comparison with time series discretization*
We appreciate your suggestion to include a comparison with methods involving the discretization of time-series, such as Moskovitch and Shahar (2015). Unfortunately, we were unable to find the corresponding code and data for comparison. Therefore, we have chosen an alternative method for comparison, namely Boss [R1], which utilizes the Bag-Of-SFA-symbols method for feature extraction.
In order to evaluate the effectiveness of our approach, we have conducted a classification task using several UCR datasets. The description of the datasets is as follows:
| Dataset | Train Size | Test Size | Length | Classes | Type | is_predictable | description |
|---------|------------|-----------|--------|---------|-----------|----------------|---------------------------------------------------|
| Mallat | 55 | 2345 | 1024 | 8 | SIMULATED | Y | a simulated dataset |
| Rock | 20 | 50 | 2844 | 4 | SPECTRO | Y | rock examples from the ASTER spectral library |
| Phoneme | 214 | 1896 | 1024 | 39 | SOUND | N | Each series is extracted from the segmented audio |
| FaceUCR | 200 | 2050 | 131 | 14 | IMAGE | N | rotationally aligned version of facial outline. |
To adapt our method for classification, we followed these steps:
1. We partitioned the sequence into past and future parts, uniformly dividing them in a 6:4 ratio for all datasets. Different partitioning methods can be explored in future research to improve the model's performance.
2. We used a self-supervised approach for training, reserving 10% of the original training data for validating self-supervised performance. The remaining data was used for training, and the self-supervised loss function included prediction, alignment, and smoothness losses. Early termination based on validation set performance was done with a patience of 3.
3. From the well-trained self-supervised model, we extracted the aggregation coefficient matrix, specifically from the past perspective. This matrix was flattened to create sequence features, which were then fed into a random forest classifier for final classification. Notably, during self-supervised training, both past and future sequences were used for consistency, but only the past coefficient matrix was utilized in the classifier.
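The three steps above can be sketched as follows. This is a hypothetical illustration: `past_coefficient_matrix` is a stand-in for the trained model's past-view aggregation-coefficient extraction (in the real pipeline these values come from the self-supervised Basisformer), and the random-forest settings mirror the classifier configuration we describe.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def split_past_future(series, ratio=0.6):
    """Step 1: uniformly partition a sequence into past and future parts (6:4)."""
    cut = int(len(series) * ratio)
    return series[:cut], series[cut:]

def past_coefficient_matrix(past, n_bases=8, n_heads=4):
    """Step 3 stand-in: the past-perspective aggregation coefficient matrix.
    A random placeholder so the sketch runs; the real matrix is produced by
    the trained self-supervised model."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((n_bases, n_heads))

def extract_features(batch):
    """Flatten each past coefficient matrix into one feature vector."""
    return np.stack([past_coefficient_matrix(split_past_future(s)[0]).ravel()
                     for s in batch])

# Final stage: a random forest classifier (100 trees, maximum depth 30).
clf = RandomForestClassifier(n_estimators=100, max_depth=30, random_state=0)
```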
We conducted a fair comparison by extracting features using both Boss and our model, ensuring that our feature parameter count did not exceed Boss's. We employed a random forest classifier with 100 features and a maximum depth of 30 for classification, and the results are summarized in the table below.
| Dataset | Boss acc | Basisformer acc | Basisformer valid_loss |
|---------|----------|-----------------|------------------------|
| Mallat  | 0.83 | 0.87 | 0.12 |
| Rock    | 0.56 | 0.72 | 0.18 |
| Phoneme | 0.20 | 0.07 | 1.27 |
| FaceUCR | 0.68 | 0.41 | 1.68 |
**The applicability of our method to self-supervision relies on predictability and consistency between past and future data.** The validation set loss in the table indicates that datasets lacking predictability have high validation losses, posing challenges for loss function optimization.
The Phoneme and FaceUCR datasets lack predictability. The Phoneme dataset includes speech segments from different individuals with random content before and after, while the FaceUCR dataset consists of flattened one-dimensional vectors of rotated face images, both lacking inherent predictability. These datasets require a holistic understanding of the entire sequence for meaningful interpretation, and as a result, our proposed Basisformer struggles to extract useful features, leading to lower performance compared to Boss.
On the other hand, datasets like Mallat and Rock, exhibiting predictability and low validation losses, allow our approach to achieve superior performance over Boss. **Surprisingly, we achieve this performance using only the representation of the past sequence as input for the classifier.**
*Q2- non-learnable basis*
Our motivation in this research is to demonstrate the superiority of learnable bases compared to manually chosen bases. **We validated this by replacing the learnable basis with commonly used bases, such as sine-cosine encoding and covariate embedding**, in Table 3 (Page 8). However, we recognize that if a domain expert possesses comprehensive knowledge of the time series characteristics and designs a basis specifically for that set, the manually chosen basis may be comparable or even superior to a learnable basis.
*Q3- input length shorter than a period*
We respectfully disagree with this comment. In our research, the notion of the basis extends beyond merely identifying seasonality. The basis serves as a condensed representation, or summary, of the inherent shapes or patterns within a collection of time series. Because the basis is learnable, it can encompass any relevant information that aids forecasting.
It is important to note that our approach involves learning both the historical and future components of the basis from the training data. During inference, these learned components are fixed. **As a result, our forecasting process solely requires understanding how these components map to each other in order to predict the future.**
[R1] Schäfer, Patrick., 2015. The BOSS is concerned with time series classification in the presence of noise. DMKD
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I have improved my confidence score.
---
Reply to Comment 1.1.1:
Title: reply to reviewer u9Fd
Comment: We appreciate your suggestions once again, as they have provided valuable insights for improving our paper. | null | null | null | null | null | null |
Unsupervised Polychromatic Neural Representation for CT Metal Artifact Reduction | Accept (poster) | Summary: The authors tackle a very relevant problem in computed tomography: metal artifact reduction. They recognize some severe gaps in conventional methods (both model-based and deep-learning-based) and propose a theoretically sound approach to close these gaps using implicit neural representations, in a somewhat similar fashion to, for example, NeRP. To solve the problem at hand, they first derive a polychromatic (as opposed to standard monochromatic) forward model to capture the nonlinear CT acquisition process. In combination with a newly developed loss function, they use this model to apply constraints derived from the physical properties of polychromatic CT and incorporate the implicit neural representation reconstruction approach. The method is not pre-trained and is optimized per scan. It performs better than model-based and supervised deep-learning methods on three datasets.
Strengths: I must say that reading this paper was a pleasure. Overall, the manuscript is written very clearly and easy to follow. The ideas are original, and motivated well, and the results are discussed and explained appropriately.
I think the idea builds nicely upon previous work, aims to tackle a very relevant problem in medical imaging, and applies to a wide range of medical imaging settings. Therefore, the clinical relevance of the work is significant.
Section 2.1 provides a clear and sound theoretical foundation to understand their approach.
All ablation studies are supported by figures that show the importance of the components of the proposed architecture, and the other experiments are done on three different data sets.
It is a great strength that the method outperforms or performs equally compared to supervised models without using any training data.
Weaknesses: I am quite familiar with INRs and methods such as NeRP, etc. I feel that the manuscript is complete and does not have severe weaknesses. There are, however, three things that I consider minor weaknesses. Firstly, the paper lacks an evaluation of a real-world patient data set. There are experiments on synthetically induced metal artifacts and the walnut scan. Still, in the end, the method should work on real patient data, and it would be better if the method were evaluated in that setting or if the authors would motivate why this was not done. Secondly, I think 2D fan beam CT is a bit old-fashioned. It would contribute to the strength of the paper – and its applicability in the real world – if the method would also work on cone-beam CT. Lastly, the speed of the approach limits the method significantly. 2 minutes for a 256x256 slice means about 4 minutes for a 512x512 slice. For brain CT, this would take 400 minutes, which is not clinically feasible. I suggest discussing potential methods to optimize the optimization process to make it more relevant for clinical application.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1) You assume that you can discretize the energy levels. Is this a valid assumption? Would a continuous approach be possible?
2) Why have you not evaluated the method on a real-world patient data set?
3) Would it be possible to extend the method to cone-beam CT or other CT acquisition protocols?
4) How would you make the optimization faster to tackle the weakness regarding computational time?
5) Why did you opt for ReLU and not SIREN activations, which are continuous, better differentiable, and have shown improved generalization capabilities?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors mention a sufficient amount of limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We value the time and effort the reviewer dedicated to our work. It is genuinely uplifting to read that, for you, "*reading this paper was a pleasure*". Below, we provide point-by-point responses to address your concerns.
---
**Q1. You assume that you can discretize the energy levels. Is this a valid assumption? Would a continuous approach be possible?**
**A1:** As expressed in Eq. 6, the estimated energy-dependent LAC maps are used to compute the projection data acquired with a polychromatic X-ray source. Theoretically, the energy of a polychromatic X-ray source covers a continuous range. For the energy-dependent LAC maps, however, discretization is necessary due to the limitations of physical instruments and the trade-off between accuracy and computational cost in measurement. Hence, discretizing energy is a common and valid strategy in the literature [1][2][3].
Through our ablation study, it is shown that enhancing the resolution of the energy spectrum improves the MAR performance. To achieve a continuous energy spectrum, leveraging another INR network to learn its continuous representation could be a feasible solution. We would like to extend our method in future work.
> [1] Punnoose, Jacob, et al. "spektr 3.0—A computational tool for x‐ray spectrum modeling and analysis." Medical physics (2016).
> [2] Boone, John M., and J. Anthony Seibert. "An accurate method for computer‐generating tungsten anode x‐ray spectra from 30 to 140 kV." Medical physics (1997).
> [3] Poludniowski, Gavin, et al. "SpekCalc: a program to calculate photon spectra from tungsten anode x-ray tubes." Physics in Medicine & Biology (2009).
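The discretized polychromatic acquisition discussed in A1 follows the generic Beer-Lambert form; below is a minimal NumPy illustration for a single ray (shapes and values are illustrative, not the paper's exact Eq. 6 implementation):

```python
import numpy as np

def polychromatic_projection(lacs, lengths, spectrum):
    """Discretized polychromatic measurement for one ray:
        p = -ln( sum_k S_k * exp( -sum_i mu_i(E_k) * l_i ) )
    lacs:     (K, N) LACs of N pixels at K discrete energy levels
    lengths:  (N,)   ray-pixel intersection lengths
    spectrum: (K,)   normalized source spectrum (sums to 1)
    """
    line_integrals = lacs @ lengths  # (K,) per-energy monochromatic line integrals
    return -np.log(spectrum @ np.exp(-line_integrals))
```

With K = 1 the expression reduces to the monochromatic Beer-Lambert model; increasing K refines the spectrum resolution, which is exactly the accuracy/cost trade-off mentioned above.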
---
**Q2. Why have you not evaluated the method on a real-world patient data set?**
**A2:** There are two factors that hinder our ability to work with clinical patient data:
- The raw projection data and geometry information usually are inaccessible on commercial clinical CT scanners.
- Using real-world patient data could raise ethical concerns, and it is crucial to handle such data responsibly.
In this preliminary work, our main objective is to present a novel unsupervised MAR method technically. In the future, we intend to conduct a comprehensive validation of our model using the clinical patient dataset, ensuring a thorough assessment of its performance and efficacy.
Here, we additionally scanned a mouse thigh containing metal needle tubing on a commercial Bruker SKYSCAN 1276 micro-CT scanner (note that this data collection was ethically approved). The qualitative results are shown in Figure R1 in the PDF file of the Global Response. Visually, our method still obtains a good reconstruction. We thus believe the efficacy of Polyner has been demonstrated by the walnut data and this mouse data with inserted metal.
---
**Q3. Would it be possible to extend the method to cone-beam CT or other CT acquisition protocols?**
**A3:** See Q2 in the Global Responses, please.
---
**Q4. How would you make the optimization faster to tackle the weakness regarding computational time?**
**A4:** See Q1 in the Global Responses, please.
---
**Q5. Why did you opt for ReLU and not SIREN activations, which are continuous, better differentiable, and have shown improved generalization capabilities?**
**A5:** In our Polyner implementation, we leverage hash encoding to enhance the fitting capacity of the MLP network (the experiments about its effectiveness are provided in Q1 of the response to Reviewer ibgY). SIREN is also an effective INR architecture where a good continuous representation can be obtained.
However, there are two main reasons we do not use SIREN activations in our Polyner:
- An excellent performance can be achieved by the hash encoding with a simple MLP.
- Using SIREN activation to replace ReLU will introduce an extra hyper-parameter $\omega_0$ (See [4] for more details). This might degrade the robustness of our model.
Considering these factors, we use hash encoding and a simple MLP in our Polyner implementation, as it fulfills our requirements effectively. However, we also believe that exploring a more effective INR architecture is a straightforward and promising way to improve the MAR performance further.
> [4] Sitzmann, Vincent, et al. "Implicit neural representations with periodic activation functions." NeurIPS. 2020.
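To make the activation trade-off concrete, here is a minimal sketch of the two layer types (NumPy, illustrative shapes; `omega_0 = 30.0` is the default suggested in the SIREN paper):

```python
import numpy as np

def siren_layer(x, weight, bias, omega_0=30.0):
    """One SIREN-style layer: sin(omega_0 * (x @ W + b)).
    omega_0 is the extra frequency hyper-parameter discussed above."""
    return np.sin(omega_0 * (x @ weight + bias))

def relu_layer(x, weight, bias):
    """The plain ReLU alternative used in the simple MLP after hash encoding."""
    return np.maximum(x @ weight + bias, 0.0)
```

The sketch highlights that the SIREN variant carries one additional hyper-parameter per layer, which is the robustness concern raised in our second point.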
---
Rebuttal Comment 1.1:
Comment: I have read the other reviews and point-by-point responses from the authors.
Global responses: Thanks for including an example of the 3D cone-beam reconstruction in the rebuttal. I understand that you prefer to mainly present the method in 2D, but I was wondering if the cone-beam results will be part of the paper or if you are going to leave them out?
Q1) Ok, clear.
Q2) I know that this info is usually not available, and that ethical concerns should be handled responsibly, but I don’t think it is that hard to acquire such data and handle it accordingly. Even one scan would help here. Will the mouse be included in the manuscript?
Q3) Ok.
Q4) Ok.
Q5) Ok.
---
Reply to Comment 1.1.1:
Comment: Dear QLvh,
Thanks for your prompt response!
We fully agree with you that experimenting with real patient data can enhance the reliability of the model. We plan to validate the model comprehensively using clinical data in our future work. Additionally, we will incorporate the cone-beam reconstruction results for the mouse thigh in the revised submission.
Best regards,
Authors | Summary: This paper introduces Polyner, an extension of implicit neural representation to a nonlinear inverse problem with a forward model that simulates the polychromatic nonlinear CT acquisition process. This design allows Polyner to reduce metal artifacts without external training data and exhibits better generalization to out-of-domain (OOD) data. Experimental results showcase improvements of Polyner on OOD datasets and clinical data while being competitive with state-of-the-art supervised methods.
Strengths: 1. The motivation is technically sound, and the proposed unsupervised method is interesting compared to the previous supervised paradigm. Taking into account the important knowledge of CT is reasonable.
2. The performance on OOD dataset and clinical images demonstrates the OOD capacity, though the quantitative evaluation (see weakness) is unfair. The ablation study confirms the effectiveness of the polychromatic CT forward model and the proposed loss function.
3. The paper is well-written and easy to follow. Experimental details are provided in supplementary materials with the open-sourced code.
Weaknesses: 1. While the author claims that "Polyner is superior to its supervised counterparts," this is not sufficiently supported by the experimental results and may be caused by unfair comparison. Directly generalizing pre-trained models to a different size (e.g., 256x256) for testing might not yield optimal results due to variations in noise levels and geometry settings. A fair comparison should re-train or fine-tune these methods to fit the specific geometry settings.
2. The suboptimal quantitative and visual results of ACDNet and DICDNet, in contrast to their original papers and CNNMAR, might be due to differences in geometry settings. It is necessary to clarify whether the supervised models were fine-tuned on the IID setting of the DeepLesion dataset. If not, please provide fine-tuned results (excluding OOD datasets) to ensure an accurate performance evaluation.
3. The color representation of CT images impedes readability. Presenting grayscale results with an appropriate window to facilitate evaluation is more reasonable. The goal of MAR is to remove artifacts **and** preserve diagnosis ability.
4. Please compare the proposed method with the state-of-the-art dual-domain methods published in 2022 or 2023. Additionally, discuss the potential of the proposed method in enhancing traditional dual-domain methods.
5. Since the proposed method is case-specific and requires optimization for each slice, it is essential to compare iterative reconstruction methods optimized in a similar manner. Moreover, reporting the computational time for a single scan will provide meaningful comparisons.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. It is important to clarify whether the DL methods (excluding CNNMAR) were fine-tuned on your dataset to match the geometry settings, following the training procedures described in their papers. If not, please provide the fine-tuned results for evaluation.
2. Please specify whether the PSNR and SSIM metrics were calculated based on specific windowing or attenuation coefficients.
3. Suggest changing the color scheme of the visual results. While it may be suitable for the ablation study to highlight details, grayscale images are more widely accepted in the field of CT imaging.
4. Sinogram inpainting-based methods should be reproduced and compared, which can show the advantages of the proposed polychromatic model.
5. Some statements are not accurate. (a) L36“..remove metal-affected extreme value signals....” Existing methods used different methods to replace these values. (b) L41 “collecting a large number of artifacts-free CT images...”, collecting such a dataset is easy. Instead, collecting a paired dataset is challenging. (c) In the caption of Fig. 3, swap left and right.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: In addition to these limitations in the paper, there is another limitation: When coming to 3D reconstruction, the proposed methods may produce discontinuous z-axis since it is a case-specific method for each slice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts and valuable comments. Below, we provide point-by-point responses to address your concerns.
---
**Q1. Clarify whether the supervised ACDNet and DICDNet models were fine-tuned on the IID setting of the DeepLesion dataset**
**A1:** The two supervised methods (ACDNet and DICDNet) and one self-supervised method (ADN) are pre-trained on the DeepLesion dataset. However, the CT image size is 416×416 for the first two, but 256×256 for the last one. In our experiments, we uniformly tested them on a DeepLesion dataset consisting of 256×256 images, referred to as DeepLesion (256), without any fine-tuning. We acknowledge that this might slightly degrade the performance of ACDNet and DICDNet, weakening our claim that "*Polyner is superior to its supervised counterparts*."
To address this concern, we constructed another DeepLesion dataset consisting of 416×416 images, referred to as DeepLesion (416), *fully following the geometry settings reported in the original papers of ACDNet and DICDNet*. We then compared them with our Polyner on the new DeepLesion (416). Note that all the hyper-parameters for optimizing our Polyner are the same as in our original submission, which demonstrates its robustness.
We show the quantitative results in Table R3. There are three observations:
- The performances of ACDNet and DICDNet are similar to their original reports on the new DeepLesion (416). For example, ACDNet achieves 40.68 dB and 40.91 dB of PSNR in the original paper and DeepLesion (416), respectively. We thus hold the performance concern caused by geometry settings is resolved.
- Compared with DeepLesion (256), all three methods achieve higher performance on DeepLesion (416). The reason is that the number of projections is 360 for DeepLesion (256) and 640 for DeepLesion (416), and increasing the number of projections improves reconstruction accuracy.
- Our method achieves the best reconstruction performance quantitatively. For instance, PSNR improves by 1.04 dB and 1.27 dB compared with ACDNet and DICDNet, respectively.
We show the qualitative results in Figure R3 in the PDF file of the Global Response. The result of our Polyner is closest to the GT sample. This experiment shows that our Polyner performs comparably to the supervised methods under the IID settings of in-domain datasets.
We will add this experiment for geometry settings into the revised manuscript for a more comprehensive evaluation.
| | DeepLesion (256) PSNR | DeepLesion (256) SSIM | DeepLesion (416) PSNR | DeepLesion (416) SSIM |
| :---: | :---: | :---: | :---: | :---: |
|DICDNet| 37.55±2.52|0.9689±0.0116| 40.68±1.79|0.9786±0.0061|
|ACDNet| **38.19±2.54**|0.9675±0.0152| 40.91±2.23|0.9753±0.0120|
|Polyner (Ours)| 37.57±1.93|**0.9754±0.0083**| **41.95±1.68**|**0.9829±0.0052**|
*Table R3: Quantitative results of DICDNet, ACDNet, and our Polyner on the two versions of the DeepLesion dataset.*
---
**Q2. How to compute PSNR and SSIM metrics?**
**A2:** The two metrics are calculated based on the attenuation coefficients.
---
**Q3. Suggest changing the color scheme of the visual results.**
**A3:** In the original submission, we use a color scheme to prominently visualize the differences among reconstructed images. However, we fully agree with Reviewer oo65 that the grayscale scheme is more standard and widely used in CT imaging. To enhance readability, we will replace the color images with a grayscale version in the revised manuscript.
---
**Q4. Supervised dual-domain methods should be compared.**
**A4:** We choose not to include dual-domain MAR methods for two primary reasons:
- Based on the experimental results reported by the original papers, the SOTA methods ACDNet and DICDNet outperform the well-known dual-domain MAR methods, such as DuDoNet and DuDoNet++.
- The official code and pre-trained models of the majority of dual-domain methods are not available, making a reliable comparison challenging.
Taking these factors into account, we have decided to use ACDNet and DICDNet as the main deep learning-based comparison methods for our study.
---
**Q5. Some statements are not accurate.**
**A5:** We appreciate the reviewer’s patient proofreading. We will carefully modify these inaccurate statements and typos in the revised submission.
---
**Q6. For 3D reconstruction, Polyner may produce discontinuous z-axis.**
**A6:** As shown in Figure R1 in the PDF file of the Global Responses, we have developed a 3D cone-beam version of Polyner and successfully used it to reconstruct a 3D image of a mouse. Extending Polyner to support 3D CT imaging is straightforward: it only requires modifying the scanning geometry used for simulating the X-rays, and the superior MAR performance remains stable. Image continuity along the z-axis is improved in the 3D implementation result.
We present Polyner in its 2D version to enable a broader comparison with existing MAR works. Due to training-efficiency and memory-footprint considerations, most deep-learning MAR models are 2D, so a 3D version could limit the fairness of our comparison.
---
**Q7. Comparison with iterative methods and computational time.**
**A7:** Thank you for the suggestion. We have added the algebraic reconstruction technique (ART), a classical iterative CT imaging method, as additional baselines. Table R4 displays the quantitative results. Two observations are evident:
- As iterative methods, both ART and our Polyner are markedly slower than the analytical FBP algorithm.
- Our Polyner achieves significant improvements compared to the two conventional methods.
We will include this comparison in the revised manuscript.
||PSNR|SSIM|Average Time|
| :---:|:---:|:---:|:---:|
|FBP |29.17±3.30| 0.7231±0.0998|**0.06 s**|
|ART |32.88±3.63| 0.8352±0.0701|180 s|
|Polyner (Ours)|**37.57±1.93**|**0.9754±0.0083**|121 s|
*Table R4. Quantitative results of FBP, ART, and our Polyner on the DeepLesion dataset. Note that ART and Polyner are all iterative methods.*
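For reference, the ART baseline can be sketched with the classical Kaczmarz update (a generic textbook formulation, not necessarily the exact implementation used for Table R4):

```python
import numpy as np

def art_reconstruct(A, p, n_iters=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz): cyclically project the
    current estimate onto the hyperplane of each ray equation A[j] @ x = p[j].
    A: (M, N) system matrix (ray-pixel intersections), p: (M,) projections."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)  # squared L2 norm of each row
    for _ in range(n_iters):
        for j in range(A.shape[0]):
            if row_norms[j] > 0.0:
                x += relax * (p[j] - A[j] @ x) / row_norms[j] * A[j]
    return x
```

Like our Polyner, this method iterates per scan, which explains the comparable (and much longer than FBP) runtimes in Table R4.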
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their additional results and detailed responses. However, the following concerns remain unsolved.
1. The authors clarified that "the two metrics are calculated based on the attenuation coefficients." A typical way is to select a specific CT window for metric calculation and comparison. For example, in Dudonet++, the metrics are evaluated in a window of [-175, +275] HU. A larger window leads to a higher PSNR value. This work used attenuation coefficients for metrics calculation, corresponding to the largest CT window.
2. This work used the trained models from the original literature for comparison. However, these models may use different training sets (patients), simulation parameters, preprocessing, and CT windows for training. Such factors could cause unfairness. Referring to both Figures 2 and 3 in the global response, it is evident that both ACDNet and DICDNet introduce noticeable artifacts, especially adjacent to the left metal. Such effects should not be present in a well-trained model. Given the source codes and checkpoints, why not re-train the model and compare the trained models with the checkpoints provided by the authors?
3. The proposed method requires 2 mins to reconstruct one slice, which may be far from clinical applications. One scan (>100 slices) should be done in less than 1 min.
In summary, the present work is interesting, in line with other reviewers. I believe a fair experimental comparison could make this work much better. Therefore, I would keep the previous rating but don't mind this paper being accepted.
---
Reply to Comment 1.1.1:
Comment: Dear oo65,
Thank you for your reply and additional suggestions!
Below are our point-by-point responses to your concerns.
---
**A1:** When an HU window is used to compute PSNR, intensity differences beyond the window are not considered. In DuDoNet++ [1], a window of [-175, +275] HU is selected to emphasize the intensity differences among soft tissues and bone (e.g., fat [-100, -90]; liver [50, 70]; kidney [20, 40]; cancellous bone [50, 200]). However, it ignores the HU levels of air (-1000 ± 10), tooth enamel [800, 1200], and compact bone [250, 2500]. In our test data, we compare CT MAR reconstruction in abdominal, dental, lung, and brain imaging, where the HU values of tooth enamel, air (intestinal lumen), pulmonary air, the skull, and the ribs all lie beyond this range. Therefore, we compute the PSNR metric on the raw attenuation coefficients and consider our comparison fair.
Here, we also fully follow the scheme reported in DuDoNet++ [1] to compute the PSNR metric. Table R6 presents the quantitative results. The performance of all methods slightly decreases but remains satisfactory. Moreover, our Polyner produces comparable performance to the supervised ACDNet and DICDNet, which is consistent with our current evaluation.
| |DeepLesion (256)-AC| DeepLesion (256)-HU| DeepLesion (416)-AC| DeepLesion (416)-HU|
|:---:|:---:|:---:|:---:|:---:|
|DICDNet |37.55±2.52 |36.01±2.07| 40.68±1.79 |37.55±1.84|
|ACDNet| **38.19±2.54** |**37.00±2.13**| 40.91±2.23 |37.94±2.57|
|Polyner (Ours)| 37.57±1.93| 36.25±1.88| **41.95±1.68**| **38.68±1.32**|
*Table R6: Quantitative results of DICDNet, ACDNet, and our Polyner on the two versions of the DeepLesion datasets by using two computational strategies. AC denotes PSNR metrics based on the raw attenuation coefficients, while HU represents PSNR metrics based on a window of [-175, +275] HU, which is consistent with the approach used in DuDoNet++ [1].*
> [1] Lyu, Yuanyuan, et al. "Dudonet++: Encoding mask projection to reduce ct metal artifacts." arXiv preprint arXiv:2001.00340 (2020).
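The two PSNR conventions compared in Table R6 can be sketched as follows (the clipping and peak conventions are assumptions; the exact DuDoNet++ evaluation details may differ):

```python
import numpy as np

def psnr(pred, gt, peak=None):
    """PSNR on raw values; `peak` defaults to the ground truth's dynamic range."""
    mse = np.mean((pred - gt) ** 2)
    if peak is None:
        peak = gt.max() - gt.min()
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_hu_windowed(pred_hu, gt_hu, window=(-175.0, 275.0)):
    """PSNR after clipping both images to a CT display window in HU,
    so errors beyond the window are ignored and the peak is the window width."""
    lo, hi = window
    return psnr(np.clip(pred_hu, lo, hi), np.clip(gt_hu, lo, hi), peak=hi - lo)
```

Because the windowed variant clips away large out-of-window intensities (air, enamel, compact bone), the two conventions can rank methods slightly differently, as Table R6 shows.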
---
**A2:** The supervised ACDNet and DICDNet models are trained on the DeepLesion dataset. In Table R3, we present their quantitative results on the two versions of the DeepLesion datasets (Please note that DeepLesion (416) is constructed by fully following the geometry settings in the original papers). Our obtained quantitative performance of ACDNet and DICDNet closely aligns with the results reported in the original papers. Therefore, we believe a fair comparison is guaranteed.
Furthermore, the results of ACDNet and DICDNet exhibit noticeable artifacts. The primary reason could be the diversity of the test samples. MAR is a very challenging problem, and since ACDNet and DICDNet are supervised methods, it is common for such methods to suffer performance drops due to challenges in generalization.
Finally, we maintain that utilizing checkpoints provided by the authors can ensure a relatively fair comparison, as training supervised models from scratch necessitates the selection of numerous hyperparameters. Consequently, we assess these baselines using the pre-trained checkpoints.
---
**A3:** We agree with Reviewer oo65 that optimization speed is a limitation of our current Polyner. However, we believe we are discussing a more advanced direction for solving the MAR task rather than providing a direct clinical solution. When needed, the optimization speed can be significantly improved by combining meta-learning [2] to learn a good initialization (for more details, please refer to Q1 of the Global Response).
> [2] Tancik, Matthew, et al. "Learned initializations for optimizing coordinate-based neural representations." CVPR. 2021.
---
We hope our responses above can solve your concerns. If you have any additional questions, we eagerly look forward to engaging in further discussions.
Best regards,
Authors | Summary: The paper with title: Unsupervised Polychromatic Neural Representation for CT Metal Artifact Reduction presents an Implicit neural representation-based method for CT metal artifacts reduction, outperforming existing supervised and unsupervised approaches.
Strengths: 1. This paper presents a novel INR-based method for CT metal artifacts reduction.
2. The authors present a non-linear forward model to model the metal artifacts, and leverages it as signal domain loss function.
3. I appreciate the real-scan results which demonstrate the results on prospective corrupted dataset.
Weaknesses: 1. Regarding the results for the real-data - I noticed that for real-data, the gap or improvements of Polyner is not as significant as the simulated dataset, ACDNet is also doing a decent job, could you elaborate on this?
2. Regarding the forward model, from the ablation studies, including the non-linear components contribute to the improved performance, however, the forward model is still not perfect without considering some system attributes. Can you elaborate on this aspect?
3. lack of quantitative results on real-data.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: I have put my questions in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: 1. The inference time is long if given limited performance improvements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work. We are pleased to receive your positive feedback. Below, we provide point-to-point responses to address your concerns.
---
**Q1. Regarding the results for the real data: I noticed that for the real data, the gap or improvement of Polyner is not as significant as on the simulated dataset, and ACDNet is also doing a decent job; could you elaborate on this?**
**A1:** For MAR methods, performance is significantly influenced by the volume and shape of the metallic implant. In our real-data scan, the metallic object has a compact structure with a volume much smaller than those in the simulation data. The metal streaking artifacts are narrow and not as complicated as those in the simulation data, as shown in the FBP reconstruction in Figure 3; since ACDNet takes the FBP images as one of its inputs, its reconstruction results are also acceptable.
However, residual streaking artifacts are still observed in the results of ACDNet, and the signal intensities obviously deviate from the reference image. In contrast, our Polyner provides high-fidelity reconstructions with negligible streaking artifacts.
---
**Q2. Regarding the forward model: from the ablation studies, including the non-linear components contributes to the improved performance; however, the forward model is still not perfect without considering some system attributes. Can you elaborate on this aspect?**
**A2:** CT acquisition is a complicated process with various application scenarios. Polyner is primarily proposed for medical CT, the most widely used of these scenarios. In medical CT, commercial scanners adhere to guidelines for biological CT scans, characterized by controlled conditions: normal-dose X-rays, full projection views, standard exposure time and SNR, and well-calibrated energy sources and signal sensors. In such a setup, our proposed forward model primarily focuses on simulating the nonlinear effects encountered when scanning a human body with metal implants. We hold that in a relatively stable environment, the nonlinear forward model covers the most critical contributors to MAR CT reconstruction. The decent performance on real data also demonstrates the effectiveness of the forward model.
Moreover, our Polyner model can easily be extended to tackle more complicated CT scanning situations. For instance, we could incorporate explicit regularization terms such as total variation into the loss function (Eq. 9) to compensate for low-SNR data acquisition under low-dose CT conditions [1]; we could combine a reprojection strategy for the sparse-view CT condition [2]; or we could jointly optimize the INR network and a transformation matrix to correct motion between projections when the scanner sensor calibration is imperfect [3].
> [1] Zang, Guangming, et al. "IntraTomo: self-supervised learning-based tomography via sinogram synthesis and prediction." CVPR. 2021.
> [2] Wu, Qing, et al. "Self-supervised coordinate projection network for sparse-view computed tomography." IEEE TCI (2023).
> [3] Wang, Zirui, et al. "NeRF--: Neural radiance fields without known camera parameters." arXiv preprint arXiv:2102.07064 (2021).
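The total-variation extension mentioned above could be sketched as follows; this is a minimal illustration with generic arrays (the penalty weight, function names, and the anisotropic TV form are our assumptions, not the paper's actual implementation):

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2D image: the sum of absolute
    differences between vertically and horizontally adjacent pixels."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def regularized_loss(pred_proj, meas_proj, img, weight=1e-3):
    """Data-consistency term plus a TV penalty of the kind that could
    compensate for low-SNR acquisitions in low-dose CT."""
    data_term = np.mean((pred_proj - meas_proj) ** 2)
    return data_term + weight * total_variation(img)
```

In a training loop, `img` would be the image currently represented by the INR and `pred_proj` its simulated projections; only the relative weighting of the two terms would need tuning.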
---
**Q3. Lack of quantitative results on real data.**
**A3:** In our real-data experiment, we only provided qualitative results. The main reason is that the reference CT image cannot be fully considered a ground truth due to non-rigid deformation caused by the insertion of the metal paper clip. This misalignment affects the accuracy of quantitative metrics. Here, we report the quantitative results in Table R2; our Polyner still performs the best. We will add these quantitative results to the supplementary material in the revised manuscript.
| |FBP| LI| ACDNet| DIDCNet| Polyner (Ours)|
| :---: |:---: |:---: |:---: |:---: |:---: |
|PSNR| 20.66| 21.94| 21.53| 21.61| **22.00**|
|SSIM| 0.3623| 0.4853| 0.3592| 0.3562| **0.5484**|
*Table R2: Quantitative results of the compared methods on the real collected walnut sample.*
---
**Q4. The inference time is long if given limited performance improvements.**
**A4:** Please see Q1 in the Global Response. We would also like to emphasize that Polyner's main contribution is being a fully unsupervised MAR method; at the same time, its performance is superior to that of existing supervised MAR methods.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for the thoughtful responses, I looked through other reviews and it answers a large portion of my concerns.
Regarding Q1: you mentioned: "In our real-data scanning, the metallic object exhibits a compact structure with a volume much smaller than those in the simulation data." Just want to check that I understood correctly: you were saying that since the metal is smaller, the improvements are less significant. I wonder, if you simulated with a smaller object, what would the results be? Or could you perform the real scan with a larger metal? Please follow up on this question.
Regarding Q2: That's fair, thanks for your response!
Regarding Q3: Sounds good.
---
Reply to Comment 1.1.1:
Comment: Dear KPnC,
Thanks for your prompt reply!
This aligns exactly with your understanding, i.e., the improvements by our model are more significant for larger-sized metals.
In our experiments on the simulated DeepLesion data, we evaluate model performance on ten different sizes of metals. The original submission reports average results over these sizes. Here, we additionally show results for three different sizes of metals in Table R5. The results reveal that our Polyner exhibits more pronounced advantages for larger-sized metals. We will add the related discussion to the revised submission.
|Metal Size|#Metal's Pixels|ACDNet|Polyner (Ours)|Improvements|
|:---:|:---:|:---:|:---:|:---:|
|Small|197|42.73 dB|42.47 dB|-0.26 dB|
|Medium|1160| 41.84 dB| 42.00 dB| +0.16 dB|
|Large|3260|38.31 dB|40.78 dB|+2.47 dB|
*Table R5: Quantitative results of ACDNet and our Polyner for three different sizes of metals on the DeepLesion (416) dataset.*
Best regards,
Authors | Summary: The paper proposes a new implicit neural representation (INR)-based polychromatic x-ray CT reconstruction technique called Polyner. In normal CT reconstruction, the tissues of the body do not vary substantially in their attenuation coefficients, so the overall forward operator can be simply modeled as linear. This changes in the presence of metal implants, because the attenuation of metal implants depends on the energy of the input X-rays, giving a polychromatic attenuation response. Several methods have been proposed for polychromatic CT reconstruction, but these methods often manifest as inpainting methods that suffer from out-of-distribution generalization issues.
The present paper addresses this issue by using a NeRF-like model for estimating the attenuation coefficients in the body. The model is optimized by passing the predicted attenuations through a forward integral model and minimizing the distortion vs. the measurements. The forward model is adapted for metal vs. non-metal regions via mask.
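The forward integral model described here — predicted attenuations integrated along each ray and weighted by the X-ray energy spectrum — can be written, in discretized form, roughly as follows (an illustrative schematic of ours, not the paper's code; sampling and normalization details are assumed):

```python
import numpy as np

def polychromatic_projection(mu, spectrum):
    """Discrete polychromatic measurement along one X-ray path:
    the detector sees sum_E w(E) * exp(-line integral of mu(., E)),
    and the sinogram stores the negative log of that intensity.
    mu: (n_samples, n_energies) attenuation samples along the ray.
    spectrum: normalized per-energy weights w(E), summing to 1."""
    line_integrals = mu.sum(axis=0)                  # one value per energy bin
    intensity = np.sum(spectrum * np.exp(-line_integrals))
    return -np.log(intensity)
```

In the monochromatic limit (a single energy bin), the expression reduces to the usual linear line integral, which is why a linear model suffices in the absence of metal implants.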
Results presented in the paper are promising: the proposed Polyner method is competitive with past methods on in-domain test datasets in terms of PSNR and SSIM metrics on the DeepLesion and XCOM datasets. Competing methods have wide variances in performance across the two datasets, whereas Polyner is more stable. On a new CT dataset consisting of a walnut scan with a metal paperclip, Polyner further establishes its strong robustness. In ablations, the paper establishes the importance of decisions taken with respect to the forward model and loss functions.
Strengths: **Originality**
- The paper presents a novel approach for the metal artifact reduction problem. Although implicit neural representations have been used for previous CT reconstruction tasks, the metal artifact reduction problem is substantially different in terms of the physics under consideration and the artifacts that can manifest.
**Quality**
- The paper includes all the expected experiments, examining multiple simulation datasets as well as new real-world data, providing strong experimental validation of the proposed method.
- Comparison methods seem up to date, including methods as recent as 2022.
**Clarity**
- The paper is clear in its motivations for the metal artifact reduction problem and the justification for the INR approach.
**Significance**
- Providing a robust reconstruction method for CT with metal artifacts is a long-standing goal of the CT field, and it seems that this paper makes a good contribution towards this goal.
Weaknesses: **Originality**
- The presence of previous INR-style methods in CT arguably reduces the originality of the current paper, but in my opinion not in a significant manner.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Most ablations consider loss functions and forward models - did the authors consider modifications to the INR architecture, or is this not an area with large expected benefits?
2. Please specify more details on the binary mask - it was not clear to me how this is calculated or used from the text.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are listed with the discussion. I normally prefer a separate listing, but I do not have a major objection to this presentation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank the reviewer for the valuable comments. We are encouraged by your recognition of our work. Below, we provide point-to-point responses to address your concerns.
---
**Q1. Most ablations consider loss functions and forward models - did the authors consider modifications to the INR architecture, or is this not an area with large expected benefits?**
**A1.** The INR architectures usually consist of an encoding module and an MLP network. The encoding module transforms low-dimensional coordinates into high-dimensional embeddings, significantly enhancing the fitting capacity of the subsequent MLP network. In our Polyner model, we employ the SOTA hash encoding for embedding, which has become a standard module in many INR-based approaches [1]. Here, we compare it with position encoding [2] and Fourier encoding [3] on the DeepLesion dataset. From Table R1, we can see that hash encoding achieves the best performance in terms of both reconstruction quality and speed. Qualitative results are shown in Figure R2 of the PDF file of the Global Response.
We will include this ablation study for the INR architecture in the supplementary material.
In summary, the INR architectures do impact the model performance, particularly in terms of optimization speed. Hence, adopting a more powerful architecture can significantly improve the proposed Polyner model.
| |Position Encoding| Fourier Encoding| Hash Encoding|
| :---:| :---:| :---:| :---:|
|PSNR |34.55±1.64| 35.07±1.42| **36.87±1.56**|
|Average Time |10 min.| 8 min.| **2 min.**|
*Table R1: Quantitative results of our Polyner implemented with three different INR architectures on the DeepLesion dataset.*
> [1] Tewari, Ayush, et al. "Advances in neural rendering." Computer Graphics Forum. 2022.
> [2] Mildenhall, B., et al. "Nerf: Representing scenes as neural radiance fields for view synthesis." ECCV. 2020.
> [3] Tancik, Matthew, et al. "Fourier features let networks learn high frequency functions in low dimensional domains." NeurIPS. 2020.
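For concreteness, the Fourier encoding compared in Table R1 maps low-dimensional coordinates to sinusoids of random frequencies, as in Tancik et al. [3]; a minimal sketch (the frequency scale and embedding size below are illustrative choices, not the paper's settings):

```python
import numpy as np

def fourier_encode(coords, B):
    """Map coordinates x in R^d to [sin(2*pi*B@x), cos(2*pi*B@x)],
    following the Fourier-features recipe. B is an (m, d) matrix of
    random frequencies, typically drawn from N(0, sigma^2)."""
    proj = 2.0 * np.pi * coords @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 2))    # 64 random frequencies for 2D coords
emb = fourier_encode(np.zeros((5, 2)), B)   # embedding of shape (5, 128)
```

The resulting high-dimensional embedding is what lets the subsequent MLP fit high-frequency image content.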
---
**Q2. Please specify more details on the binary mask - it was not clear to me how this is calculated or used from the text.**
**A2.** Due to the substantial distinctions in the linear attenuation coefficients (LACs) between metallic substances and biological tissue, we applied a straightforward threshold segmentation to the CT images reconstructed by FBP to generate these binary metal masks.
In the EDS regularization term (Eq. 8), we use these masks to distinguish between metal and body regions and thus can enforce a smooth prior along the energy spectrum for the body area. We will include these details in the revised manuscript.
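The mask-generation step described above amounts to a one-line thresholding; a sketch (the threshold value and HU-like scale are our illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def metal_mask(fbp_image, threshold=2500.0):
    """Binary metal mask via threshold segmentation: metallic
    substances have far higher reconstructed values than biological
    tissue, so a simple cutoff separates the two regions."""
    return fbp_image > threshold
```

The resulting boolean mask then selects which pixels are treated as body in the EDS term and constrained to be smooth along the energy spectrum.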
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their reply. All of my questions are addressed. I have also read the other reviews. I thought there were some good comments by Reviewer oo65, but these were well-addressed by the authors in their rebuttal. I continue to hold my rating of 7.
---
Reply to Comment 1.1.1:
Title: Thank you for your efforts
Comment: Dear ibgY,
Thank you for your prompt response!
We are delighted to learn that your concerns have been addressed. Your constructive comments have truly enhanced our work. Once again, thank you for your efforts!
Best regards,
Authors | Rebuttal 1:
Rebuttal: ### **Global Responses**
We sincerely thank the reviewers for their insightful comments and suggestions!
We are encouraged by the reviewers' recognition of the novelty (ibgY, KPnC, QLvh), strong motivation (ibgY, oo65, QLvh), technical (oo65, QLvh) and theoretical (QLvh) soundness of our research. Their positive remarks on the anticipated experiments (ibgY, KPnC, QLvh), high performance (ibgY, KPnC, oo65, QLvh), and the clarity of our writing (ibgY, oo65, QLvh) are also highly appreciated.
Here we address two common concerns raised by multiple reviewers. Subsequently, we provide detailed responses to the individual feedback provided by each reviewer.
---
**Q1. Acceleration of optimization process (KPnC, oo65, QLvh)**
**A1:** Polyner uses an implicit neural representation (INR) as its core architecture, and in light of recent advancements in INR, several methods have been proposed over the last two years to speed up image reconstruction. In our work, the current Polyner takes about 2 minutes to reconstruct a 2D CT image of 256×256 on a single NVIDIA RTX TITAN GPU.
There are three possible solutions to further accelerate the reconstruction:
- *Enhanced encoding schemes*. While the hash encoding used in our Polyner implementation currently represents the state-of-the-art (SOTA), exploring more powerful encoding schemes is a straightforward and promising approach.
- *Meta-learning*. An alternative solution involves using meta-learning to facilitate a superior initialization for our Polyner model. Empirically, Tancik et al. [1] demonstrated that meta-learning can significantly accelerate the optimization of INR networks across diverse tasks, including image fitting, sparse-view CT reconstruction, and novel view synthesis. However, this meta-learning-based solution requires an external dataset consisting of numerous artifact-free CT images, which might limit its applicability in certain rare cases.
- *Specialized acceleration chips*. The proposed Polyner is a deep learning-based method and is trained on general-purpose GPUs. Nevertheless, we could use specialized acceleration chips to significantly accelerate the optimization.
> [1] Tancik, Matthew, et al. "Learned initializations for optimizing coordinate-based neural representations." CVPR. 2021.
---
**Q2. Extension of our Polyner model to 3D CT imaging (oo65, QLvh)**
**A2:** Our current Polyner model is presented for 2D fan-beam CT, and it can easily be extended to advanced CT acquisition protocols. For instance, in the case of a 3D cone-beam setting, only a simple modification of the X-ray simulation following the 3D acquisition geometry is needed, while all other steps remain unchanged. One benefit of 3D reconstruction could be local consistency along the z-axis in the reconstructed image. In addition, there is a potential problem for 3D CT images: the INR network requires greater capacity to represent 3D objects, especially for large image dimensions, which may increase computational costs such as the memory footprint.
As shown in Figure R1 in the PDF file of the Global Responses, we implement a 3D cone-beam version of Polyner on a mouse CT scan conducted with a commercial Bruker SKYSCAN 1276 micro-CT scanner (note that this data collection was ethically approved). Our 3D Polyner model was employed to reconstruct a mouse thigh region with a size of 200×200×150. The enlarged figures in Figure R1 illustrate the mouse thigh regions containing metal needle tubing. The whole optimization process requires approximately 10 GB of memory and takes around 32 minutes when executed on a single NVIDIA RTX TITAN GPU. Visually, our method still obtains a good reconstruction, as shown in Figure R1.
Besides, we prefer to present Polyner in its current 2D fan-beam version to facilitate a broader comparison with recent deep learning-based MAR methods. Due to training efficiency, most deep-learning MAR models are 2D; a 3D version could limit the fairness of our comparison.
Pdf: /pdf/9959f12f826194ce3f380cf90c4c76c0c443c206.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Personalized Dictionary Learning for Heterogeneous Datasets | Accept (poster) | Summary: This paper tackles the problem of personalized dictionary learning (PerDL) for heterogeneous datasets that share some commonality. The authors propose a federated meta-algorithm called PerMA that can provably recover both global and local dictionaries from heterogeneous datasets. They show the applications of PerDL in different learning tasks, such as training with imbalanced datasets and video surveillance.
__Post-rebuttal__
After reading the rebuttal, I would like to keep my score.
Strengths: • The paper proposes a federated meta-algorithm called PerMA that can provably recover both global and local dictionaries from heterogeneous datasets.
• The paper provides theoretical guarantees on the identifiability, convergence, and robustness of PerMA, and demonstrates its applications in different learning tasks. The result on the Surveillance Video Dataset looks interesting.
Weaknesses: • The paper does not compare PerMA with other existing methods for dictionary learning or federated learning, such as personalized PCA (PerPCA), which would be helpful to evaluate its performance and advantages.
• In the surveillance video experiments, why is there only one original image and one local atom? The authors are encouraged to give some explanation. Moreover, only one case study seems insufficient.
• Federated learning has been studied for a long time; the authors need to further highlight the differences between their proposed "global matching and local update" and existing methods.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: My biggest concern is the insufficient experiments. This paper lacks a comparison with existing dictionary learning methods. Moreover, there are no actual numerical results in the experiments. I'd like to see more experiments during the rebuttal.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors truly appreciate the reviewer's insightful comments.
> The paper does not compare PerMA with other existing methods for dictionary learning or federated learning, such as personalized PCA (PerPCA), which would be helpful to evaluate its performance and advantages.
We kindly refer the reviewer to our general response, where we validate the effectiveness and efficiency of our method compared to various state-of-the-art methods (including PerPCA). We will add these new experiments to our revised paper.
>In Surveillance Video experiments, why there is only one original image and one local atom? The author is encouraged to give some explanation. Moreover, only one case study seems insufficient.
There seems to be a misunderstanding about our experimental setup, and we apologize for any confusion caused. To clarify, there is indeed more than one local and global atom corresponding to each frame. During our training process, there are in total $61$ different images that share common backgrounds but have individually unique components (cars). For each frame, we learn 30 global atoms and 546 local atoms. The pictures in the second and third columns are reconstructions of the original images using only 50 atoms of the learned dictionary. We apologize for the lack of clarity in our explanation, which will be addressed in the revised paper.
Regarding the reviewer's comment about the numerical section, we want to clarify that our work indeed includes three case studies, two of which specifically pertain to real-life applications in image and video processing.
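To make the "reconstruction using only 50 atoms" concrete, a bare-bones greedy sparse-coding sketch is given below (a matching-pursuit-style simplification of ours, not the paper's actual sparse coding routine):

```python
import numpy as np

def reconstruct(signal, atoms, k):
    """Greedy sparse reconstruction of a signal from a dictionary:
    repeatedly pick the atom most correlated with the residual and
    subtract its contribution, keeping at most k atoms."""
    residual = signal.astype(float).copy()
    recon = np.zeros_like(residual)
    for _ in range(k):
        scores = atoms.T @ residual            # correlation with each atom
        j = np.argmax(np.abs(scores))          # best-matching atom
        coef = scores[j] / (atoms[:, j] @ atoms[:, j])
        recon += coef * atoms[:, j]
        residual -= coef * atoms[:, j]
    return recon
```

With the learned dictionary's columns as `atoms` and `k = 50`, this kind of procedure produces reconstructions analogous to the second- and third-column images.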
> Federated Learning is a long studying problem so far, the author is needed to further highlight the difference between their proposed ``global matching and local update” with existing methods.
We express our gratitude to the reviewer for raising this important comment. It is crucial to emphasize that, to the best of our knowledge, the problem of personalized dictionary learning (PerDL) has not been previously explored and formally defined. While existing methods in federated learning bear some resemblance to PerDL, they are not directly applicable to address this problem with provable guarantees. To clarify these distinctions, we present a detailed comparison between our work and the most closely related papers by *Huang et al. (2022)* and *Gkillas et al.(2022)*.
Firstly, we rigorously define the problem of personalized dictionary learning in our paper, and our proposed method PerMA is explicitly tailored to extract commonality while preserving heterogeneity across clients. In *Huang et al. (2022)*, no personalization is considered, and in *Gkillas et al.(2022)*, their solution to personalization involves merely broadcasting frequently used atoms without a comprehensive approach to address the underlying challenges.
Secondly, a critical aspect in solving PerDL is the alignment of atoms from different clients, and we address this challenge through our novel graph-based method. In contrast, *Huang et al. (2022)* proposes a brute-force approach for atom alignment, while *Gkillas et al.(2022)* does not consider this important aspect at all.
Thirdly, existing works on federated learning, including the two papers mentioned above, have a broader scope and lack specific guarantees on PerDL. In contrast, our meta-algorithm is explicitly designed to tackle PerDL and is equipped with provable guarantees, ensuring its efficacy and effectiveness in personalized dictionary learning scenarios.
Finally, we kindly refer the reviewer to our general response, where we provide a detailed comparison between the performance of our method and these methods.
### Reference
Gkillas, A., Ampeliotis, D., and Berberidis, K. (2022). Federated dictionary learning from non-iid data. In 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop(IVMSP), pages 1–5. IEEE.
Huang, K., Liu, X., Li, F., Yang, C., Kaynak, O., and Huang, T. (2022). A federated dictionary learning method for process monitoring with industrial applications. IEEE Transactions on Artificial Intelligence. | Summary: This work tackles the problem of heterogeneity in federated learning through dictionary learning. The authors name this problem _Personalized Dictionary Learning_ (PerDL), which seeks to learn (linear) representations for the heterogeneous datasets from clients, which are supposed to share common characteristics. The insight of their approach is that PerDL will disentangle global/general and local/unique features from clients via DL. The authors provide a thorough analysis of convergence of their method, as well as a federated strategy for learning the dictionaries (PerMA).
__Post-Rebuttal Acknowledgment__
I have read the authors' rebuttal and the other reviewers' comments. In their rebuttal, the authors fully addressed my concerns and provided important results and discussion, including further discussion on the ethical impacts of their work. Overall, I think the authors' rebuttal greatly improves the quality of their submission. As a result, I increased my score to 6: Weak Accept.
Strengths: __Originality.__ The authors present an original and theoretically grounded work for dictionary learning in a federated setting.
__Quality and Clarity.__ The paper is well written and clear. Assumptions are clearly stated and the concepts behind theorems are clearly defined. Analyzing the abstract alone, the federated aspect of this work is not clear (see W1 and S4 below).
__Significance.__ While I consider the authors contribution novel, the fact that in the paper only qualitative results are shown, and that the authors do not draw comparisons with other SOTA methods hinders the significance of this work.
Weaknesses: Below I list a series of weaknesses of the current paper. Please see my suggestions in the next section on how to improve these points.
__Concerning Clarity__
__W1.__ Even though it is clear from the introduction, the abstract does not mention federated learning at all.
__Concerning the authors experiments__
__W2.__ Sections 5.2 only contain qualitative results (i.e., Figures 3 and 4). This hinders the assessment of their method, especially in comparison to other strategies.
__W3.__ The authors only provide comparisons with non-collaborative/non-federated approaches. A comparison with the methods of [Gkillas et al., 2022] and/or [Huang et al., 2022] would improve the impact of the authors' results.
__Concerning Ethical Considerations__
__W4.__ Even though the authors' method is not specifically tailored for video surveillance, I think the authors should provide a broader discussion on the ethical impacts of their experiment.
__Post Rebuttal__
In their rebuttal, the authors correctly addressed all of the above weaknesses. As a result, I raised my score from 3 (Reject) to 6 (Weak Accept).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ## Suggestions
Here, authors may find a list of suggestions associated with the raised weaknesses.
__S1.__ Authors should include the federated learning motivation in their abstract.
__S2.__ Include quantitative results in Sections 5.2 and 5.3. Concerning image reconstruction, a table comparing PSNR, MSE, or SSIM metrics of the different tested strategies would be interesting. The same metrics would apply to the data of individual clients. Concerning surveillance video, the authors could consider the F-score, as done in [Cuevas et al., 2016, Table 7].
__S3.__ Compare their method against SOTA methods (e.g., [Gkillas et al., 2022] and/or [Huang et al., 2022])
__S4.__ Include a discussion about the possible impacts of the authors work in video surveillance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did not explicitly discuss the limitations of their work in the main paper. Some discussion is given in Appendix A.3, where the authors comment on the performance of their method when $N$, $d$, and $p$ are large. Furthermore, in Section 3.1 the authors analyze the complexity of their method. I think the authors could contextualize this discussion with the overall challenges of large-scale tasks (such as video surveillance).
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate the reviewer's helpful comments and detailed suggestions.
> W1/S1
We thank the reviewer for this comment and apologize for not mentioning federated learning in the abstract. Rest assured, we will address this issue by adding federated learning motivation to our abstract.
>W2/W3/S2/S3
We acknowledge and apologize for the insufficiency of comparisons and quantification in our numerical sections. We kindly refer the reviewer to our general response, where we validate the effectiveness and efficiency of our method compared to various SOTA methods (including those mentioned by the reviewer) and with quantifiable metrics. We will add these new experiments to our revised paper.
>W4/S4
We thank the reviewer for raising their concern regarding the ethical impact of our work. This concern has been further investigated by two additional ethics reviewers, and both indicated that neither our method nor our simulations raise any significant ethical issues.
Nonetheless, we plan to add the following discussion on the broader and social impact of our work:
"Our novel approach for personalized dictionary learning presents a versatile solution with immediate applications across various domains, such as video surveillance and object detection. While these applications offer valuable benefits, they also bring to the forefront ethical and societal concerns, particularly concerning privacy, bias, and potential misuse.
In the context of video surveillance, deploying object detection algorithms may inadvertently capture private information, leading to concerns about violating individuals' right to privacy. However, it is important to emphasize that our proposed algorithm is specifically designed to focus on separating unique and common features within the data, without delving into the realm of personal information. Thus, it adheres to ethical principles by ensuring that private data is not processed or used without explicit consent.
Bias is another critical aspect that necessitates careful consideration in the deployment of object detection algorithms. Biases can manifest in various forms, such as underrepresentation or misclassification of certain groups, leading to discriminatory outcomes. Our approach acknowledges the importance of mitigating biases by solely focusing on the distinction between common and unique features, rather than introducing any inherent bias into the learning process.
Furthermore, the potential misuse of object detection algorithms in unauthorized surveillance or invasive tracking scenarios raises valid concerns. As responsible researchers, we are cognizant of such risks and stress that our proposed algorithm is meant to be deployed in a controlled and legitimate manner, adhering to appropriate regulations and ethical guidelines."
> The authors did not explicitly discuss the limitations of their work in the main paper.
We thank the reviewer for their comment. To address it, we will add the following paragraph to the paper:
"Even though our meta-algorithm PerMA enjoys strong theoretical guarantees and practical performance, there are still several avenues for improvement. For instance, the theoretical success of PerMA, especially the Global Matching step, relies on an individual initial error of $O(1/N)$. In other words, the initial error should decrease as the number of clients grows. As a future work, we plan to relax such dependency via a more delicate analysis. We also note that imposing an upper bound on the initial error is not unique to our setting, as virtually all existing algorithms for classical (non-personalized) dictionary learning require certain conditions on the initial error. On the other hand, once the assumption on the initial error is satisfied, our meta-algorithm achieves a final error with the same dependency on $d$ (the dimensionality of the data) and $n$ (the number of samples) as the state-of-the-art algorithms for classical dictionary learning (*Agarwal et al. (2016)*, *Arora et al. (2015)*). Remarkably, this implies that personalization is achieved without incurring any additional cost on $d$ or $n$, making PerMA highly efficient and competitive in its performance."
### Reference
Agarwal, A., Anandkumar, A., Jain, P., and Netrapalli, P. (2016). Learning sparsely used overcomplete dictionaries via alternating minimization. SIAM Journal on Optimization, 26(4):2775–2799.
Arora, S., Ge, R., Ma, T., and Moitra, A. (2015). Simple, efficient, and neural algorithms for sparse coding. In Conference on learning theory, pages 113–149. PMLR.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed rebuttal, their additional experiments and the discussion on the ethical impact of their work. As a result of these elements I am raising my score from 3. Rejection towards 6. Weak Accept.
Overall, the authors correctly addressed the issues raised in my review. In particular, the authors included new quantitative results and compared their method to the existing state-of-the-art, which considerably strengthens their experimental section. Furthermore, the authors included an important discussion on the ethical impacts of their experiments.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for carefully reading our rebuttal and raising the score. We would also like to thank the reviewer again for the helpful suggestions on improving our paper, especially the numerical section. We will include all the points raised during the rebuttal process in our final paper. | Summary: This paper studies the problem of personalized federated learning with each client conducting dictionary learning on heterogeneous tasks. This paper splits the learned dictionary into a global dictionary and local dictionaries. It provides the conditions under which the two types of dictionaries can be provably identified. It designs a federated meta-algorithm where clients only pass estimated global dictionaries to the center. It also proves linear convergence of the federated learning procedure under some assumptions.
Strengths: 1. The objective function is concise for the described personalized dictionary learning problem.
2. It gives conditions when global dictionaries and local dictionaries can be identified. The two conditions satisfy the intuition.
3. The algorithm finds the global dictionaries given initial client dictionaries by finding the shortest path based on DAG, which is an interesting solution.
4. The overall algorithm is theoretically guaranteed.
5. The writing of the paper is good. The description of the algorithm is clear.
Weaknesses: The experiments are somewhat weak. Using each frame as a client is also strange in the third experiment.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Please discuss the related techniques about graph methods when tackling the dictionary problem.
2. The dimension r_g of the global dictionary is the most important hyper-parameter of the algorithm. How to choose it in the experiments? How does the dimension r_g influence the algorithm? More discussions should be added.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: More experiments should be added. Authors could take more experiments on tabular data sets. It is better to test how the dimension of the global dictionary r_g influences the performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your insightful comments and suggestions.
> The experiments are somewhat weak.
We kindly refer the reviewer to our general response, where we have included numerical comparisons between our method and other existing methods. Our method indeed exhibits superior performance in all three case studies.
> Using each frame as a client is also strange in the third experiment.
We agree with the reviewer's observation that the utilization of each frame as a "client" is not a conventional approach. However, in this context, the definition of a "client" is somewhat artificial, driven by the ultimate objective of distinguishing common and individually distinctive elements, such as the background and the cars. In this framework, each "client" captures the unique features of a frame. It is worth noting that our definition of a client may also mirror real-world scenarios, where each client only has access to a limited number of frames that share local features (e.g., frames captured from a specific angle). Although such applications hold significance, they are not the primary focus of this paper.
> Please discuss the related techniques about graph methods when tackling the dictionary problem.
The authors thank the reviewer for this comment. In theoretical studies of dictionary learning, graph-based methods have been used to provide early-stage estimates of the true dictionary, followed by classic alternating methods to achieve exact recovery. In *Arora et al. (2014)*, the authors provide a novel method for dictionary learning based on a connection graph, whose purpose is to detect whether or not two samples (or signals) share the same atom. Such an idea is further developed in *Agarwal et al. (2014)* and *Arora et al. (2015)*.
The graph-based method in this paper, however, is fundamentally different from the previous works. The main idea behind our approach is to detect the commonalities and similarities among the atoms by casting the problem as a series of shortest path problems over a synthetically generated directed acyclic graph (DAG). To the best of our knowledge, such an approach has not been used before for identifying the common features in dictionary learning. The closest to our proposed method is the Federated Matched Averaging (FedMA) algorithm introduced by *Wang et al. (2020)* for the federated learning of neural network architectures. In this method, identifying a global model is cast as a series of assignment problems over bipartite graphs. This approach is a special case of our proposed graph-based method, where the number of layers is limited to two.
Finally, to solve the shortest path problems, we used the well-known labeling algorithm, which can solve the single source shortest path problem in linear time (with respect to the number of edges in the graph). This algorithm is already implemented in the built-in MATLAB function "shortestpath.m".
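As a point of reference, a single-source shortest path on a DAG can be computed in linear time by relaxing nodes in topological order. The following is a generic Python sketch of that idea, illustrative only; it is not the MATLAB `shortestpath` implementation referred to above, and assumes the target is reachable from the source.

```python
from collections import defaultdict, deque

def dag_shortest_path(edges, source, target):
    """Single-source shortest path on a DAG in O(V + E) time.

    `edges` is a list of (u, v, weight) tuples. Nodes are relaxed in
    topological order, which is what yields the linear-time bound.
    """
    graph = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indegree[v] += 1
        nodes.update((u, v))

    # Kahn's algorithm to obtain a topological order of the DAG.
    order, queue = [], deque(n for n in nodes if indegree[n] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)

    # Relax every outgoing edge of each node, in topological order.
    dist = {n: float("inf") for n in nodes}
    pred = {}
    dist[source] = 0.0
    for u in order:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u

    # Reconstruct the path by walking predecessors back to the source.
    path, node = [target], target
    while node != source:
        node = pred[node]
        path.append(node)
    return dist[target], path[::-1]
```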
> The dimension $r_g$ of the global dictionary is the most important hyper-parameter of the algorithm. How to choose it in the experiments? How does the dimension $r_g$ influence the algorithm? More discussions should be added.
We sincerely thank the reviewer for this very insightful comment. As the reviewer correctly mentioned, $r_g$ is an important hyper-parameter of our algorithm. A larger $r_g$ means more global atoms are sent between the central server and the clients, which leads to a stronger collaboration between them. In the synthetic experiment, we assume the value of $r_g$ is known, while in real-life applications, one needs to fine-tune this parameter. An interesting observation we made, which was omitted due to limited space, is that the proposed PerMA algorithm can be augmented by a simple detection mechanism for identifying the correct choice of $r_g$. Specifically, during the Global Matching step, where we iteratively remove shortest paths, we can closely monitor the length of the obtained shortest path. By terminating the removal of paths (i.e., adding global atoms) when the path's length experiences a significant increase beyond a predefined threshold, we can effectively identify the appropriate value of $r_g$ without requiring prior knowledge. This detection mechanism alleviates the burden of fine-tuning $r_g$ and allows for a more practical and robust implementation of the algorithm.
To validate the efficacy of this approach, we conducted a series of experiments, the results of which are presented in Figure 3 in the pdf file. We use different values $r_g = 4, 6, 8$ with $r = 10$ and monitor the lengths of the paths. As evident from the outcomes, a clear and drastic increase in the length of the $(r_g+1)$-th shortest path is observed, signifying the correct value of $r_g$.
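The detection mechanism described above reduces to a simple stopping rule over the sequence of extracted shortest-path lengths. A minimal sketch, where the function name `detect_r_g` and the jump-ratio threshold are hypothetical illustration choices rather than the authors' code:

```python
def detect_r_g(path_lengths, threshold):
    """Infer the number of global atoms r_g from successive path lengths.

    `path_lengths` lists the lengths of shortest paths in the order they
    were extracted during Global Matching; `threshold` is the jump ratio
    beyond which a length increase is considered "drastic". Returns the
    number of paths accepted before the first such jump.
    """
    for k in range(1, len(path_lengths)):
        if path_lengths[k] > threshold * path_lengths[k - 1]:
            return k  # accept the first k paths as global atoms
    return len(path_lengths)  # no jump observed: accept all paths
```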
We will add the new experiments to the revised manuscript.
### Reference
Agarwal, A., Anandkumar, A., Jain, P., Netrapalli, P., and Tandon, R. (2014). Learning sparsely used overcomplete dictionaries. In Conference on Learning Theory, pages 123–137. PMLR.
Arora, S., Ge, R., Ma, T., and Moitra, A. (2015). Simple, efficient, and neural algorithms for sparse coding. In Conference on learning theory, pages 113–149. PMLR.
Arora, S., Ge, R., and Moitra, A. (2014). New algorithms for learning incoherent and overcomplete dictionaries. In Conference on Learning Theory, pages 779–806. PMLR.
Wang, H., Yurochkin, M., Sun, Y., Papailiopoulos, D., and Khazaeni, Y. (2020). Federated learning with matched averaging. arXiv preprint arXiv:2002.06440.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed rebuttal. The discussion on the related techniques and the way to choose the important hyperparameter addresses my issues.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our rebuttal is able to address your concern. We will integrate new experiments into our revised manuscript. | Summary: This paper proposed a challenging problem named Personalized Dictionary Learning (PerDL), which learned a shared global dictionary and individual local dictionary for heterogeneous datasets.
In order to investigate the feasibility of the problem, several definitions and assumptions are provided to establish the theoretical guarantees. Under these conditions, a meta-algorithm called Personalized Matching and Averaging (PerMA) is proposed to solve the problem. The convergence of PerMA is theoretically guaranteed.
Experiments are conducted on synthetic, imbalanced digits reconstruction and video surveillance datasets, which show the effectiveness of PerMA.
Strengths: (1). The problem is well-defined to ensure identifiability, feasibility, and convergence with the help of certain mild assumptions and definitions. This involves Assumptions 1 and 2, Definitions 1 and 2. This way, it is natural to investigate and derive a solution under the federated learning context.
(2). A federated meta-algorithm (PerMA) is proposed to solve the PerDL problem. In particular, Global Matching and Local Updates steps are designed in the federated setting. Global matching utilized a shortest path algorithm to tackle the non-convex and different initialization problems. Local updates employed a linearly-convergent algorithm.
(3). With proper assumption and mild conditions, the convergence of PerMA is proved, being a theoretical contribution to ensure the feasibility of the PerDL problem.
(4). Experiments on three settings verify the rationale of PerDL and the effectiveness of PerMA.
Weaknesses: Overall, the theory and method are good.
The experiments are a bit weak, considering that only an independent strategy is adopted as the baseline.
Can it be compared with other methods, such as personalized PCA?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The comparison is a bit weak.
Is it possible to compare with other baselines in Dictionary learning or Federated learning?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Can it be compared with other methods, such as personalized PCA?/Is it possible to compare with other baselines in Dictionary learning or Federated learning?
Thank you for this helpful suggestion. We kindly refer the reviewer to our general response, where we have included numerical comparisons between our method and other existing methods. Our method indeed exhibits superior performance in all three case studies.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal! The included numerical comparisons addressed my concern.
I will keep my score as Accept.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our rebuttal is able to address your concern and thank you for your support for the paper! | Rebuttal 1:
Rebuttal: We are thankful to the reviewers for carefully reading and commenting on the strengths and weaknesses of our paper. A recurring comment among the reviewers was on the limitation of our experiments. We have thoroughly addressed this comment by conducting more experiments on our method and comparing its performance with three methods (suggested by the reviewers). The results can be found in the uploaded pdf file. We upload the code anonymously via [this link](https://anonymous.4open.science/r/PerMA-4B9E/README.md). We will integrate the results into the final version of our paper. In what follows, we briefly explain our new experiments.
### Synthetic Dataset
In this section, we compare our method with *Gkillas et al. (2022)* and *Huang et al. (2022)* under the same setting as in Section 5.1. The results can be seen in Figure 1. In the first row, clients are provided with heterogeneous datasets of similar sizes; in the second row, we consider the special case in which one of the clients has an insufficient sample size and evaluate the performance of the dictionary learned by that specific client. The first column corresponds to final errors with varying $d$; the second column corresponds to total running times with varying $d$; the third column corresponds to final errors with varying $N$; and the fourth column corresponds to final errors with varying sparsity levels. All the results are averaged over 3 independent trials. As can be seen in Figure 1, PerMA consistently outperforms the methods proposed by *Gkillas et al. (2022)* and *Huang et al. (2022)*.
### MNIST Dataset
Thanks to the suggestions by Reviewer zsPp, we compare the quality of the reconstructed images using dictionaries learned from different methods under three metrics: MSE, PSNR and SSIM. A smaller MSE, a larger PSNR, and a larger SSIM indicate better image reconstruction quality. In Table 1, $k$ denotes the number of atoms used to reconstruct the image.
As can be seen in the table, PerMA achieves the best result in all sections except for the training time.
### Surveillance Video Dataset
In this paper, we use PerMA on the surveillance video datasets, with the goal of separating common elements shared by all clients (the background) from unique elements (different cars). Such a task cannot be accomplished by *Gkillas et al. (2022)* and *Huang et al. (2022)* due to their lack of personalization. We refer interested reviewers to our response to Reviewer zsPp for further discussion of PerMA on video surveillance. As a result, to compare our method with the state-of-the-art, we run Personalized PCA (PerPCA), introduced by *Shi and Kontar (2022)*, on the same datasets. According to Figure 2, PerDL outperforms PerPCA by achieving better separation and higher resolution. We note Reviewer zsPp's suggestion to use the F-score to quantify our results. However, the F-score is mainly used as a metric to evaluate the accuracy of object detection strategies. Turning PerMA into an object detection strategy is an interesting future research direction but is not the focus of our paper.
### Reference
Gkillas, A., Ampeliotis, D., and Berberidis, K. (2022). Federated dictionary learning from non-iid data. In 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop(IVMSP), pages 1–5. IEEE.
Huang, K., Liu, X., Li, F., Yang, C., Kaynak, O., and Huang, T. (2022). A federated dictionary learning method for process monitoring with industrial applications. IEEE Transactions on Artificial Intelligence.
Shi, N. and Kontar, R. A. (2022). Personalized pca: Decoupling shared and unique features. arXiv preprint arXiv:2207.08041.
Pdf: /pdf/12fc34d92ae9ed6954196c4f2410d21e1115bdac.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
TIES-Merging: Resolving Interference When Merging Models | Accept (poster) | Summary: This paper identified two sources of performance degradation when merging fine-tuned models, (i) redundant parameters and (ii) sign conflicts, and proposed TIES-MERGING to address them.
Strengths: 1. The motivation of merging fine-tuned models to improve performance is compelling.
2. Experiments were conducted with both NLP and vision, and the proposed method improved accuracy with both.
3. The proposed method is simple and computationally inexpensive, making it easy to reproduce.
Weaknesses: 1. There is no theoretical support for the proposed method.
2. In particular, the effect of sign conflict on performance is nontrivial and requires theoretical support.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Is there any theoretical support regarding sign conflict?
2. I needed help understanding how to see Fig. 6. What is "Parameters With Nonzero Values" in Fig. 6.a mean?
3. In Figure 6, why does it look like you assume that a larger mean magnitude will result in better accuracy?
In other words, why do you know that a task vector with a small magnitude does not affect accuracy but also that a task vector with a large magnitude improves accuracy?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Weakness 1, 2 and Question 1:** Theoretical support for the proposed method.
**Answer:** Our work is supported by several past works that shed light on different components of our method. We discuss each of them below and will add a similar discussion to the updated paper.
**Why Merging Works:** As mentioned in L91-102, model merging and weight interpolation work because models finetuned from the same pre-trained initialization demonstrate mode connectivity [1] which implies that the finetuned checkpoints lie in an approximately convex loss basin in parameter space [2]. The theoretical underpinnings of mode connectivity, therefore, support the merging procedure we develop.
**Existence of Sparse Subnetworks:** Past works [3,4] have shown that during model finetuning, most information is encoded in a small and sparse subnetwork inside the full model. Achille et al. [5] describe two phases of training where the first phase discovers the important connections and their topology between layers and the second phase fine-tunes this relatively fixed pattern. In our work, we find that the top-k% parameters form a subnetwork that is responsible for most of the performance, and we only consider merging these parameters to avoid unnecessary interference from the others.
**Relevance of Signs:** In addition, the Lottery Ticket Hypothesis [3] finds subnetworks (winning tickets) by magnitude pruning and shows that by re-training them with static sparsity starting from the initial weights, they reach similar or higher accuracy. However, they demonstrated that random initialization, with the same structure, does not suffice. This implies that the values of the weight are crucial for good performance. Zhou et al. [4] build upon this and show that one may not need the exact weights at initialization to train these subnetworks (lottery tickets) but the signs may be sufficient. We also find that during merging signs are critical for model performance (see the main paper, Figure 7) and hence we explicitly focus on resolving the sign conflict when merging models.
___
**Question 2:** I needed help understanding how to see Fig. 6. What is "Parameters With Nonzero Values" in Fig. 6.a mean?
**Answer:** In Figure 2, we provided the intuition of the different types of interference that exist when merging models and how it impacts the parameter values in the merged model. In Figure 6, we quantify the effect of both types of interference for real models that are being merged. As mentioned in L281-283, we bin the parameters of the model into three categories: (1) parameters that are redundant for all tasks (not in top-20% of any of the models); (2) parameters only influential for 1 model; and (3) parameters that are influential to more than one task. These categories help elucidate the impact of interference. For each category, we plot the means of the values in the merged model when using basic mean and trimming + disjoint mean. We know that zeroing out the redundant parameters does not affect the performance of the original task. However, removing redundant parameter values when merging models allows us to retain the change in that parameter's value introduced by another fine-tuned model. In contrast, taking the mean of all the values reduces the magnitude of the parameter in the merged model which might hurt the performance of a task that had a high magnitude in that direction. This is also explained in L49-59.
___
**Question 3:** In Figure 6, why does it look like you assume that a larger mean magnitude will result in better accuracy? In other words, why do you know that a task vector with a small magnitude does not affect accuracy but also that a task vector with a large magnitude improves accuracy?
**Answer:** We do not assume that a larger mean magnitude will result in better accuracy. However, in the trimmed task vector if the value of a parameter is high the parameter is influential for the task and we have observed that reducing the values of the parameters results in a degradation in performance. As noted in Figure 3, resetting the top magnitude values results in huge performance drops. Consequently, TIES merging removes all redundant parameters and only merges the influential ones (that lead to a drop in performance if reduced).
___
[1] Linear Mode Connectivity and the Lottery Ticket Hypothesis
[2] Loss surfaces, mode connectivity, and fast ensembling of dnns.
[3] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
[4] Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
[5] Critical Learning Periods in Deep Networks
---
Rebuttal Comment 1.1:
Comment: Thanks for the answer.
I'm clear on Questions 2 and 3, but I need more theoretical backing for Question 1. I appreciate the experimental contributions of this paper. Given that there's a suggestion in common question 3 to add a suitable limitation, I am happy to raise my score.
---
Reply to Comment 1.1.1:
Comment: We are glad that we were able to answer all of your questions and that you have decided to update your score. And yes, we will add in our limitations section (as mentioned in the general response) that there is limited theoretical understanding on why model merging works. Thank you again for your time. | Summary: The paper presents a novel method, TIES-MERGING, to merge models in the weight space for multitask learning. It observes an interference problem when linearly interpolating weights, and proposes a simple yet effective two-step solution: parameter trimming for small changes during fine-tuning and sign conflict resolution. The experiments in multitask shows that TIES improves performances, making it a notable (experimental) contribution to the literature of model merging.
Strengths: * The paper's main strengths lie in its simplicity, as highlighted by its clear description and illustration.
* The paper successfully builds upon the "task arithmetic" task vector approach to report a new interference phenomenon, and then enhances performance with simple yet important modifications. Model merging is an important topic in multitask, and this paper fills a crucial gap in the current literature.
* The experimental framework is robust, with a focus on significant large-scale tasks across CV and NLP domains using recent architectures and fair benchmarks.
Weaknesses: Despite its strengths, some areas require attention.
* The contributions, though valuable, are incremental, and the observed gains in multitask learning are consistent but arguably marginal, and the trimming/sign play only a marginal role in this gain.
* The experiments focus solely on models trained on different tasks. Yet, weight averaging is also useful to merge models trained on a single target task; on this model soups setup, I speculate that sign interference is less an issue, and that TIES would actually be detrimental as it would increase variance, thus limiting the benefits from combining multiple models, in particular under distribution shifts. As a minimum fix, the title should reflect this specificity, as the current one does not adequately reflect this scope limitation. A (naive) suggestion could be "Resolving Multitask Interference When Merging Models".
* Even within the multitask challenge, the experiments do not cover two important scenarios. First, multitask as better pretraining before fine-tuning on a target task (as in "Fusing finetuned models for better pretraining"). Second, multitask in the sequential patching scenario (as in "Patching open-vocabulary models by interpolating weight").
* Lack of analysis and clarity of the interpolating coefficient, for TIES and for the baselines. Specifically, without validation samples, could you clarify which value of $\lambda$ is used: it seems that it's $\lambda=1/|num tasks|$ for weight averaging, $\lambda=0.4$ for task arithmetic (could you please point to where you found this value), but $\lambda=1$ for TIES. Therefore, is the difference between task arithmetic/weight averaging in Table 1 simply due to the use of different $\lambda$? Thus (as further suggested by Table 3), scaling is an important factor, and these different values of $\lambda$ hide the true impact of your core contributions.
* Similarly, the ablation study in Table 3 could be made clearer. It is not clear whether the ablations are done one at a time or sequentially all together. If the latter is true, then why do we recover 65.5 on T5-base (when all 4 components are removed), while weight averaging performs 65.9? Moreover, what does it mean to remove "elect" while keeping "disjoint mean"? what does it mean to remove "scaling" (is it $\lambda=1/|num tasks|$ or something different)?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What is the impact of task similarity on the number of sign conflicts? Does the number of sign conflicts decrease when two models are fine-tuned on a shared task?
* Can you visualize the number of sign conflicts for different trimming ratios?
* Can TIES improves downstream transfer learning performances?
* Could you plot the curve performance while varying the coefficients given to different tasks: see Pareto curves in "Patching open-vocabulary models by interpolating weight", but also in "Pareto manifold learning: tackling multiple tasks via ensembles of single-task models", a missing yet important related work.
* Could you enrich the ablation study to further clarify/highlight the different impacts of the key contributions.
* How does ensembling of predictions behave in comparison with weight interpolation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have highlighted in Sections 7.3 and 7.4 that a key limitation is to accurately elect the sign.
Yet, the paper would benefit from a dedicated limitation section. It should ideally include that:
- weight interpolation lacks proper theoretical understanding,
- their approach is only (verified) for multitask learning,
- the averageability relies on a large pretraining and "good" hyperparams,
- TIES may be less impactful in a "sequential patching" scenario,
- they still lag behind the simultaneous multitask learning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Weakness 1:** The contributions, though valuable, are incremental.
**Answer:** We note that the improvements of TIES-Merging over Task Arithmetic are precisely due to the role of trimming values and electing signs when merging since there are no other differences between the methods. Moreover, the improvements reported in Table 1 (with validation) range from 0.7% to 3.6% over 11 tasks for IA3, 7 tasks for T5, and 8 tasks for vision experiments. Across all tasks that are averaged over, the improvement for IA3 is 27.5%, 4.9% for T5-base, 25.2% for T5-large, 14.4% for ViT-B/32, and 12% for ViT-L/14 across all tasks. Hence, we submit that these improvements are not marginal.
___
**Weakness 2 (Part 1), Questions 3 and 6:** "Model soups" experiments, merging multiple checkpoints from the same task, comparison with ensembling.
**Answer:** We performed additional experiments to merge multiple checkpoints trained on the same task (as done in Model Soups) and also compared with ensembling. We use the experimental setting and code from "Merging Models with Fisher-Weighted Averaging", i.e. merging differently fine-tuned BERT models trained on RTE, MRPC, and WNLI. For each of these datasets, we select 10 fine-tuned checkpoints based on bert-base-uncased from the Hugging Face Hub and then merge them using different methods and report the results below.
||RTE|MRPC|WNLI|
|-|-|-|-|
|Averaging|59.93|78.19|56.34|
|Fisher|65.7|81.37|52.11|
|Ensemble|70.76|86.03|45.07|
|Task Arithmetic|71.84|86.03|59.15|
|TIES|72.2|86.76|58.75|
From the results presented above (also presented as Table 1 in Rebuttal PDF), we observe that TIES merging works the best in all cases except WNLI, where it only slightly underperforms Task Vectors. Notably, TIES merging provides a dramatic boost over both Fisher Merging and averaging, and **outperforms ensembling in all cases**.
___
**Weakness 2 (Part 2) and Question 1:** Sign interference is less of an issue for models trained on a single task.
**Answer:** In rebuttal PDF Figure 2, we show that sign conflicts exist even within models trained on the same task. We plotted the sign conflict between the 10 checkpoints of RTE, MRPC, and WNLI from HF (mentioned above). As the number of checkpoints increases, sign conflict increases. We also compare this with the sign interference when merging different task checkpoints and find a similar degree of interference in all of these cases. Hence, sign conflicts exist even within models trained on the same dataset. We suspect that this is because models are highly overparameterized, so there are multiple subnetworks (subsets of parameters) that can lead to the same performance, and different finetuning runs may update the same parameters in different directions.
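As a minimal sketch of how such a sign-conflict measurement could be computed, assuming the task vectors (fine-tuned minus pre-trained weights) are flattened into a 2-D NumPy array; this is illustrative only, not the exact measurement code behind the rebuttal figure:

```python
import numpy as np

def sign_conflict_fraction(task_vectors):
    """Fraction of parameter positions where at least two task vectors
    disagree in sign (zeros are ignored).

    `task_vectors` is an (n_models, d) array of fine-tuned-minus-pretrained
    weight differences, one row per checkpoint.
    """
    signs = np.sign(task_vectors)        # entries in {-1, 0, +1}
    has_pos = (signs > 0).any(axis=0)    # some model pushes the parameter up
    has_neg = (signs < 0).any(axis=0)    # some model pushes it down
    return float((has_pos & has_neg).mean())
```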
___
**Weakness 3:** multitask as better pretraining before fine-tuning on a target task.
**Answer:** Based on your suggestion, we performed additional experiments following the setting from "Fusing Finetuned Models for Better Pretraining", specifically evaluating on the same three GLUE tasks as the previous experiment (RTE, MRPC, WNLI). We take finetuned bert-base-uncased model checkpoints for 8 GLUE tasks (wnli, sst2, rte, qnli, mrpc, cola, mnli, qqp) from HF. When finetuning on a given task, we merge the other seven checkpoints together (excluding the chosen task) and use the merge as an initialization for fine-tuning on the chosen task. The results are presented in rebuttal PDF Table 2 and copied below. We find that TIES merging works well in this setting and outperforms all other merging methods by a significant margin (apart from Averaging for WNLI).
|Init Method|RTE|MRPC|WNLI|
|-|-|-|-|
|PTM Init|66.42|81.86|56.33|
|Average|75.81|86.51|56.33|
|Task Arithmetic|78.33|86.27|50.7|
|TIES|80.14|87.99|54.92|
___
**Weakness 4:** Lack of analysis and clarity of the interpolating coefficient….
**Answer:** Please refer to our general response on how the hyperparameters were selected and their effect on performance. In addition, we note that scaling is indeed an important factor for both Task Vectors and TIES merging because after merging the scale needs to be recalibrated. However, in all the experiments (apart from the three rows of Table 1 without a validation set), for both methods we fairly tune $\lambda$ as a hyperparameter and use the best value. Please refer to the general response for other details.
___
**Weakness 5 and Question 5:** Similarly, the ablation study in Table 3 could be made clearer.
**Answer:** As mentioned in L311-314, the ablations are done by removing one component at a time from the full TIES merging method. Removing elect while keeping the disjoint mean means averaging all nonzero values of the trimmed task vectors, regardless of sign, while still excluding the zeroed-out entries from the mean. Removing the disjoint mean while keeping trimming and electing means averaging the sign-matching values together with the zeros introduced by trimming. Removing scaling means using $\lambda=1$. Removing components one at a time allows us to quantify the effect of each component. If you have any other suggestions for ablations, please feel free to let us know and we will add them to the paper.
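To make the three components concrete, here is a minimal sketch of the full pipeline on flattened task vectors (our illustration, not the authors' implementation; the function name and defaults are assumptions):

```python
import numpy as np

def ties_merge(task_vectors, k=0.2, lam=1.0):
    """Sketch of trim -> elect -> disjoint mean on flat task vectors."""
    tvs = np.asarray(task_vectors, dtype=float)
    # (1) Trim: keep only the top-k fraction of each vector by magnitude.
    trimmed = np.zeros_like(tvs)
    for i, tv in enumerate(tvs):
        keep = int(np.ceil(k * tv.size))
        idx = np.argsort(-np.abs(tv))[:keep]
        trimmed[i, idx] = tv[idx]
    # (2) Elect: per-parameter aggregate sign of the trimmed values.
    elected = np.sign(trimmed.sum(axis=0))
    # (3) Disjoint mean: average only nonzero entries matching the elected
    #     sign, normalizing by the number of contributing models.
    agree = (np.sign(trimmed) == elected) & (trimmed != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return lam * (trimmed * agree).sum(axis=0) / counts

merged = ties_merge([[1.0, -0.1, 0.5, 0.05],
                     [0.8,  0.9, -0.7, 0.02]], k=0.5)
print(merged)  # [0.9 0.9 0.5 0. ]
```

With `k=0.5` each toy vector keeps its two largest-magnitude entries; the third parameter's $-0.7$ is trimmed away, so it never conflicts with the elected $+$ sign.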
___
**Question 2:** Can you visualize the number of sign conflicts for different trimming ratios?
**Answer:** In rebuttal PDF Figure 3, we merge the 10 bert-base-uncased models finetuned on the different GLUE tasks (RTE, MRPC, and WNLI) and plot the sign conflict as a function of $k$. As we keep more and more parameters, the sign conflict increases and reaches almost 80%. This is expected, as there are many more nonzero parameters that can create conflicts even if their magnitudes are small.
___
**Question 4:** Plot the curve performance while varying the coefficients given to different tasks and missing important related work.
**Answer:** We will cite this in the updated version! For the curves please refer to the general response Question 2.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the clear rebuttal and the additional experiments that clarify the paper and its contributions. My main remaining concern is the small gains in Table 1 from the rebuttal wrt Task arithmetic; thus, I still think that TIES is useful in "multitask" setups, but perhaps less in a single-task/model-soups scenario. Yet, I am happy to increase my score to a 6.
---
Reply to Comment 1.1.1:
Comment: We are glad that we were able to address most of your concerns and that you have decided to update your score! However, a small note is that we are not able to see the updated score on our end yet, so it would be great if you can double-check once. Thanks! | Summary: This paper delves into the challenge of integrating multiple task-specific fine-tuned models into a singular, multitask model, without necessitating additional training. The authors identified that current methodologies overlook the interference that occurs between parameters of different models. This interference can be attributed to two primary sources: redundant parameter values and conflicts in the sign of a given parameter's values across various models. To counteract these issues, the authors introduced a novel method, TIES, which incorporates three key steps: (1) resetting parameters that underwent minimal changes during fine-tuning, (2) resolving conflicts in sign, and (3) merging only those parameters that align with the final agreed-upon sign. The proposed method demonstrated SOTA performance across different settings.
Strengths: - The paper provides a comprehensive analysis of the sources of interference in existing model merging methods, specifically pinpointing redundancies in model parameters and disagreements between parameter signs. This thorough examination underpins the motivation for the proposed method.
- The proposed method demonstrates robust performance across a wide array of conditions, including various modalities, domains, task quantities, model sizes, architectures, and fine-tuning settings. This versatility underscores the method's adaptability and broad potential for application
- The informative and insightful Section 7, which delves into the significance of different components, particularly the estimation of correct signs during the merge process, serves as a valuable resource.
Weaknesses: - The proposed method is quite heuristic and it would make the paper stronger if the authors can provide more theoretical analysis of the proposed method.
- The author did not mention the limitations of their method and potential future work. The authors could further discuss them, which will provide a more balanced view of the method and give readers an idea of the potential directions for future research.
- The paper could be improved by discussing more real-world applications of the studied problem, integrating multiple task-specific fine-tuned models into a singular, multitask model, without necessitating additional training. Why is this problem important? This would help to demonstrate the practical value of the method and its potential impact in real-world scenarios.
- It would be beneficial to include a sensitivity analysis of the hyperparameters used in the method. Although the authors provided a generic recipe of TIES with fixed hyperparameters, it would be good to see how sensitive the method is to different hyperparameters.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I don't have further questions for now and I look forward to the authors' enhancements on the weaknesses mentioned above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The author did not mention the limitations of their method and potential future work. I have listed some suggestions for improvement in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Weakness 1:** Theoretical Justification
**Answer:** Our work is supported by several past works that shed light on different components of our method. We discuss each of them below and will add a similar discussion to the updated paper.
**Why Merging Works:** As mentioned in L91-102, model merging and weight interpolation work because models finetuned from the same pre-trained initialization demonstrate mode connectivity [1] which implies that the finetuned checkpoints lie in an approximately convex loss basin in parameter space [2]. The theoretical underpinnings of mode connectivity, therefore, support the merging procedure we develop.
**Existence of Sparse subnetworks:** Past works [3,4] have shown that during model finetuning, most information is encoded in a small and sparse subnetwork inside the full model. Achille et al. [5] describe two phases of training, where the first phase discovers the important connections and their topology between layers and the second phase fine-tunes this relatively fixed pattern. In our work, we find that the top-k% parameters form a subnetwork that is responsible for most of the performance, and we only consider merging these parameters to avoid unnecessary interference from the rest.
**Relevance of Signs:** In addition, the Lottery Ticket Hypothesis [3] finds subnetworks (winning tickets) by magnitude pruning and shows that by re-training them with static sparsity starting from the initial weights, they reach similar or higher accuracy. However, they demonstrated that a random initialization with the same structure does not suffice, implying that the weight values are crucial for good performance. Zhou et al. [4] build upon this and show that one may not need the exact weights at initialization to train these subnetworks (lottery tickets); the signs may be sufficient. We also find that during merging, signs are critical for model performance (see Figure 7 in the main paper), and hence we explicitly focus on resolving sign conflicts when merging models.
___
**Weakness 2:** Limitation and Future Work
**Answer:** Please refer to the general response Question 3.
___
**Weakness 3:** Discussing more real-world applications of the studied problem. Why is this problem important?
**Answer:** As mentioned in L24-28, there are thousands of FT checkpoints released on the Hugging Face Hub every day, and model merging allows us to reuse these checkpoints to create other models with desired properties: (1) Merging provides a cheap way to obtain models with specific abilities by combining multiple models. (2) Merging can also be useful in settings where users cannot explicitly share their data; instead, they can share model weights, which can then be combined across users to create a better model. (3) Merging multiple models together also allows for the collaborative development of open-source models, where models are created and patched with specific abilities.
___
**Weakness 4:** Sensitivity analysis of the hyperparameters used in the method
**Answer:** Please refer to our general response and Figure 1 in the Rebuttal PDF.
___
[1] Linear Mode Connectivity and the Lottery Ticket Hypothesis
[2] Loss surfaces, mode connectivity, and fast ensembling of dnns.
[3] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
[4] Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
[5] Critical Learning Periods in Deep Networks
---
Rebuttal Comment 1.1:
Comment: I'm grateful for the authors' thorough responses and their initiative in addressing the highlighted concerns. I will retain my score. | Summary: Merge multiple **fine-tuned** neural networks, each from a unique task (dataset), into one neural network by weight averaging. Denote the weights of the pretrained network as $\theta_0$, the fine-tuned network as $\theta_{\tau}$, and the weight update direction as $\tau = \theta_{\tau} - \theta_0$.
The paper introduces two tricks: 1) for each $\tau$, keep only the top $k$% of elements by magnitude (setting the others to zero); 2) during the weight averaging, retain only the dominant direction (+1 or -1) for each element.
The experiments show the improvements of the proposed Ties-Merging method over other weight averaging methods, such as naive weight averaging, task arithmetic, and regmean. But Ties-Merging still lags significantly behind multitask training.
Strengths: - The writing is clear and easy to read.
- The two introduced tricks are reasonable and easy to understand.
- Compared with naive weight averaging, task arithmetic, Fisher averaging, and regmean, the proposed ties-merging performs very well.
Weaknesses: - Even though the proposed weight averaging method outperforms other weight averaging methods, it still lags behind multitask training.
Furthermore, considering both the training cost and inference time, it is hard to see the benefit of the proposed method over multitask training. As for the other baseline methods, such as naive weight averaging, Fisher averaging, and regmean, they are designed for the IID setting (one fine-tuning task instead of multiple different tasks), not OOD. These two facts reduce the strength of the experiments. (But this is acceptable, because the two introduced tricks are good enough.)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Simple averaging is $1/n \sum_{\tau} \theta_{\tau} = \theta_0 + 1/n \sum_{\tau}\tau$
- Task Arithmetic is $\theta_0 + \lambda \sum_{\tau}\tau$
What is the parameter $\lambda$ of task arithmetic in Figure 5? If it is $0.4$ as in line 210, why is there a big gap between simple averaging and task arithmetic in Figure 5 (given that $1/2=0.5$, $1/3=0.33$)?
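The two merging rules in the question can be written down directly. The sketch below (our illustration, not from the paper) shows that with $\lambda = 1/n$ task arithmetic reduces to simple averaging, which is the premise of the question:

```python
import numpy as np

def simple_average(finetuned):
    # 1/n * sum_t theta_t  ==  theta_0 + (1/n) * sum_t (theta_t - theta_0)
    return np.mean(finetuned, axis=0)

def task_arithmetic(theta_0, finetuned, lam):
    # theta_0 + lam * sum_t tau_t,  where tau_t = theta_t - theta_0
    taus = [theta - theta_0 for theta in finetuned]
    return theta_0 + lam * np.sum(taus, axis=0)

theta_0 = np.array([0.0, 1.0])
models = [np.array([1.0, 1.0]), np.array([0.0, 3.0])]
# With lam = 1/n (= 0.5 for two models) the two rules coincide:
print(task_arithmetic(theta_0, models, lam=0.5))  # [0.5 2. ]
print(simple_average(models))                     # [0.5 2. ]
```

Any gap between the two curves in Figure 5 therefore has to come from $\lambda$ being tuned away from $1/n$.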
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
**Weakness 1 (Part 1):** Method still lags behind multitask training.
**Answer:** While we agree that the gap between TIES merging and multitask training is an important limitation, we note that multitask learning requires simultaneous training on all the training data at once. In contrast, when merging models, we only require access to individual-task models and do not require additional training. These individual task models can be trained asynchronously and recycled to create many different merged models, and are frequently shared e.g. on the Hugging Face Model Hub (where hundreds of thousands of fine-tuned models are available). Our methodological contribution identifies an important aspect of interference when merging task-specific models and shows notable improvements over existing methods.
___
**Weakness 1 (Part 2):** The training cost and inference time make it hard to see the benefit of the proposed method over multitask training.
**Answer:** We would like to clarify that our method does not require any additional training. In a real-world setting, most of these models would be trained by different people for their use cases and then uploaded to a common hub for public use. Moreover, the inference cost of the merged model is the same as that of the original model. Hence, there is no additional inference cost either.
___
**Weakness 1 (Part 3):** Other baseline methods, such as naive weight averaging, Fisher averaging, and regmean, are designed to solve IID (one fine-tuning task instead of multiple different tasks), not OOD.
**Answer:** RegMean was specifically designed for merging different tasks via their objective, and the RegMean paper includes experiments of merging different tasks in section 5.1. Similarly, weight averaging is also used to merge different tasks for better initialization [1]. Moreover, in PDF Table 1, we have provided additional experiments where we show that TIES merging outperforms other methods even when merging multiple checkpoints from the same task. Hence, even in the "IID" setting TIES outperforms other methods. For more details please refer to our response to reviewer ioFQ.
___
**Question 1:** What is the value of the parameter $\lambda$ of task arithmetic in Figure 5? If it is 0.4 as in line 210 …
**Answer:** The value of $\lambda=0.4$ is only used for the experiments without a validation set in Table 1 (three rows); in all other experiments the value of $\lambda$ is tuned on a validation set. In Figure 5, for each subset of tasks that are merged, we select the best $\lambda$ on the validation set. Hence, it is not expected for averaging and Task Arithmetic to be close to each other and we see a difference in performances.
[1] Fusing finetuned models for better pretraining, Choshen et. al. 2022
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the answer.
**Rebuttal 1 & rebuttal 2**: individual-task models do not require additional training, since there are a lot of shared models, e.g. on Hugging Face.
That is problematic in practice. These shared models may not be averageable, due to fine-tuning strategies. To make two fine-tuned models averageable, certain mild conditions need to be satisfied in practice, such as a small learning rate, mild regularization, etc. So individual-task models still require additional training with the proposed method.
**Rebuttal 4**: the parameter $\lambda$.
Could you provide the exact number of $\lambda$ of different "Number of Tasks" in Figure 5? So that readers and compare the number with "simple averaging" (green line).
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional comments. Please find our responses below.
**Rebuttal 1 & 2 Answers:** The ensembling/model soup experiments, included in our rebuttal merges random checkpoints taken from the huggingface hub that can potentially have different training schedules and unknown hyper-parameters. Similar to, section 3.1 of Fisher Merging [1], section 4.2, and Appendix D.6 of Task Arithmetic [2], we observe that merging models trained on checkpoints trained by the community satisfies the underlying merging criteria in most cases and leads to good performance. However, if there are cases where merging some checkpoint degrades the performance, we can use the Greedy Soup recipe from the Model Soups [3] to eliminate such problematic checkpoints during merging. Moreover, we agree that additional training might lead to even better-merged models. But were able to successfully merge and use models from the HF hub without additional training and this observation on merging checkpoints is in line with many successful past works like Model soups [3], Task Arithmetic [2], and Fisher Merging [1].
___
**Rebuttal 4 Answers:** As mentioned in L265-267, for Figure 5 we take at most 10 random subsets of the 8 tasks we were considering for each number of tasks. The solid line is the average of the merged performance of these different runs. Below we provide the optimal lambdas for the different subsets of tasks we merged for both TIES and Task Arithmetic; note that for averaging it is always 1/(# tasks). Each entry in the list is the optimal lambda for a particular subset of tasks.
TIES (2 tasks) -> [1.7, 1.9, 2, 2, 1.1, 1.5, 1.6, 1.8, 1.9, 1.5]
Task Arithmetic (2 tasks) -> [1, 0.9, 1, 1, 0.9, 1, 0.9, 0.9, 0.9, 1]
TIES (3 tasks) -> [1.2, 2, 1.5, 1.9, 1.8, 1.7, 1.4, 2, 3, 1.9]
Task Arithmetic (3 tasks) -> [1, 0.7, 0.7, 1, 1, 0.9, 0.7, 0.7, 0.9, 1]
TIES (4 tasks) -> [1.5, 1.3, 1.3, 1.8, 2.3, 1.7, 1.8, 1.7, 1.9, 1.5]
Task Arithmetic (4 tasks) -> [0.8, 0.7, 0.7, 0.7, 0.6, 0.7, 0.7, 0.8, 0.6, 0.7]
TIES (5 tasks) -> [2, 2, 2, 1.8, 1.7, 2, 1.6, 2.1, 1.6, 1.3]
Task Arithmetic (5 tasks) -> [0.7, 0.8, 0.6, 0.8, 0.7, 0.6, 0.6, 0.6, 0.6, 0.7]
TIES (6 tasks) -> [1.6, 1.7, 1.7, 1.2, 1.7, 1.7, 1.5]
Task Arithmetic (6 tasks) -> [0.6, 0.5, 0.5, 0.5, 0.7, 0.5, 0.6]
TIES (7 tasks) -> [1.7]
Task Arithmetic (7 tasks) -> [0.5]
___
[1] Merging Models with Fisher-Weighted Averaging.
[2] Editing Models With Task Arithmetic.
[3] Modelsoups: Averaging Weights of Multiple Fine-Tuned Models Improves Accuracy Without Increasing Inference Time. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and for providing constructive comments for enhancing the paper. We appreciate that reviewers recognized:
- That our paper fills a crucial gap in the current literature (ioFQ).
- Our notable experimental contribution (reviewer ioFQ) via robust, comprehensive, and fair experimental setup (reviewer ioFQ, b9mw, Rfna).
- Simplicity (reviewer mRV1, ioFQ, Rfna, rBCc), clear description (reviewer ioFQ, Rfna), and computational efficiency (reviewer mRV1).
___
In the responses, we include the following additional experiments showing,
1. ("Model soups" experiments) TIES outperform other methods when merging checkpoints trained on the same task (Rebuttal PDF Table 1).
2. ("Fusing for better Finetuning" experiments) TIES provides a better initialization compared to other methods for fine-tuning (Rebuttal PDF Table 2).
3. Effect of hyperparameters on the method’s performance (Rebuttal PDF Figure 1).
4. A significant amount of sign conflicts exist even when merging different checkpoints of the same tasks (Rebuttal PDF Figure 2).
5. Sign conflict increases as we vary the top-k threshold (Rebuttal PDF Figure 3).
___
### **Common Questions.**
**Question 1:** Clarification on the chosen values of lambda for different methods.
**Answer:** As a first point of clarification, in all of our experiments apart from a subset of rows in Table 1, $\lambda$ is a hyperparameter and we tune it over the validation set. All the hyperparameters were chosen in this way for all the methods, including TIES merging and all baselines.
When there is no validation set available, it is not possible to tune hyperparameters in this way, so we use a fixed value of $\lambda = 1$ for TIES (L654-659 and L205-206), and $\lambda=0.4$ for task arithmetic (L210). Weight averaging does not consider $\lambda$ as a hyperparameter and always uses 1/(# tasks) with or without a validation set (L185-186). In Appendix D3 of the task arithmetic paper, they suggested that $\lambda$ values between 0.3-0.5 work best for most cases and we therefore used 0.4 (the midpoint) in our experiments without a validation set.
TIES merging takes the disjoint mean of the task vectors, which already normalizes the values by the number of models with non-zero values for a parameter. Hence, we use $\lambda = 1$ for TIES. In contrast, Task Arithmetic adds all the task vectors ($\theta_m = \theta_{init} + \lambda \sum_i \tau_i$), and hence a $\lambda$ value of 1/(# tasks) for Task Arithmetic is in spirit similar to using $\lambda = 1$ for TIES.
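A quick numeric check of this correspondence (our illustration only): when every model contributes a nonzero value to every parameter, summing task vectors with $\lambda = 1/n$ matches averaging them with $\lambda = 1$:

```python
import numpy as np

# Three toy task vectors with all-nonzero entries.
taus = [np.array([0.2, -0.4]), np.array([0.6, 0.3]), np.array([0.1, 0.2])]
n = len(taus)
# Task Arithmetic update with lam = 1/n ...
scaled_sum = (1.0 / n) * np.sum(taus, axis=0)
# ... equals a mean-based update with lam = 1 (absent trimming/sign conflicts).
plain_mean = 1.0 * np.mean(taus, axis=0)
print(np.allclose(scaled_sum, plain_mean))  # True
```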
___
**Question 2:** Effect of hyper-parameters $\lambda$ and $k$ on the performance.
**Answer:** In Rebuttal PDF Figure 1 (left and middle), we plot the effect of $\lambda$ on the performance when merging T5-base and T5-large models trained on GLUE (similar to Table 1). For TIES merging, we vary $\lambda$ around the value 1 because TIES takes the mean of task vectors, whereas task arithmetic adds up the task vectors. Hence, a $\lambda$ value of 1 for TIES is similar to using $\lambda$ = 1 / (# tasks) for Task Arithmetic. The range of 0.8-1.8 for TIES was selected based on preliminary experiments on the PEFT setting (as mentioned in L654-659 and L205-206). We find that TIES-Merging is much less sensitive to changes in $\lambda$ (with an accuracy range of 68-75 across the considered values of $\lambda$) compared to Task Arithmetic (with an accuracy range of 55-75). For the effect of $k$ (Rebuttal PDF Figure 1, right), we increment the value of $k$ in steps of 10 and skip $k=0$ as that would select no parameters. We observe that as $k$ increases the performance drops and then saturates. However, we would like to note that this curve might change based on the distribution of the values in the task vector.
___
**Question 3:** Limitation and future works.
**Answer:** We will add the following discussion to the main paper in the revised version.
As pointed out by reviewer ioFQ, our work shares the same general limitations as existing merging methods: (1) a limited theoretical understanding of why and when weight interpolation works, what the important underlying factors are, and its proper connections with mode connectivity; (2) merging relies on a common initialization and model architecture; and (3) merging individual-task models to create a multitask model still lags behind simultaneous multitask training. Moreover, it is not clear how to select the checkpoints to merge in order to create multitask models useful for specific domains. In addition, while our method provides a way to choose signs when merging task vectors, we still find that using the signs from a multitask model performs better. Potential future work includes finding a good way to estimate multitask signs without access to a multitask model, as this has the potential to bridge the gap between multitask merging and multitask training (as demonstrated in Section 7.4).
Pdf: /pdf/2e6ccc8d653f07ffaaae7e271277897e85e07f4b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes to resolve the interference of model merging, a solution to combine multiple task-specific models into a single multitask model. It demonstrates two major sources of interference, including redundant parameter values and sign conflict and proposes solutions to resolve the interference.
Strengths: The paper demonstrates some interesting research insights and the solutions are simple and clear.
Weaknesses: 1: I question the necessity of Section 7.3. According to the network pruning literature, a high magnitude is always an implication of importance/sensitivity and a widely used pruning metric. That flipping the sign of, or pruning away, the top-k% parameters can cause a significant performance drop is well known in the literature. It seems to have a weak connection with this paper’s main argument.
2: I did not find the discussions regarding limitations and future work.
3: More analysis of the experimental results is required. For example, what are the effects of the proposed method on each task respectively? Why do some tasks perform better than others?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Refer to questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do NOT adequately address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness 1:** Necessity of Section 7.3, and that pruning away the top-k% parameters can cause a significant performance drop is well known in the literature.
**Answer:** As mentioned in L166-168, we would like to clarify that we are pruning the task vectors (i.e. the difference between the fine-tuned model and the pre-trained model), not the model's parameters. Prior work in the pruning literature mostly focused on pruning model parameters. Furthermore, we provide experiments and analysis to provide more intuition and motivation behind pruning task vectors. For example, Figure 3 shows that deleting the bottom 80% of the values from the task vector doesn’t affect performance while Figure 7 shows that the signs of the remaining top 20% of the parameters in the task vector are very crucial.
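To make the distinction concrete: trimming keeps the top-k% of the *task vector* by magnitude while leaving the model's parameters dense. A minimal sketch (our illustration, with hypothetical names), where the final model is the pre-trained weights plus a sparse update:

```python
import numpy as np

def trim_task_vector(theta_ft, theta_pre, keep_frac=0.2):
    """Zero out all but the top keep_frac of the task vector by magnitude;
    the returned model itself remains dense."""
    tau = theta_ft - theta_pre
    keep = int(np.ceil(keep_frac * tau.size))
    mask = np.zeros_like(tau)
    mask[np.argsort(-np.abs(tau))[:keep]] = 1.0
    return theta_pre + tau * mask  # pre-trained weights + sparse update

theta_pre = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
theta_ft  = np.array([1.1, -2.0, 1.5, 0.4, 3.05])
# Only the two largest updates (at indices 2 and 3) survive trimming:
print(trim_task_vector(theta_ft, theta_pre, keep_frac=0.4))
```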
___
**Weakness 2:** Limitations and future works
**Answer:** Please refer to our general response Question 3.
___
**Weakness 3:** What are the effects of the proposed method on each task respectively? Why do some tasks perform better than others?
**Answer:** We provide per-task results in Appendix Tables 6-12. For the effect of merging on individual tasks, we hypothesize that the improvements over other methods are in cases where there is significant interference between tasks. However, when merging multiple tasks it is harder to predict the effect on individual tasks and would be a good research question for future study. | null | null | null | null | null | null |
Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case | Accept (poster) | Summary: In the vein of the seminal work by Sakaue and Oki for L-convex functions, this paper proposes a new method to accelerate M-convex function minimization using past predictions, a technique known as warm-start.
The contributions are as follows:
1) Present a framework to accelerate M-convex function minimization with past predictions
2) Applying this framework to the Laminar, Box and Nested classes of problems and improving time bounds using warm start for each of these problems
Strengths: The results in the experiments section seem promising. Although I'm not an expert in discrete optimization, the problems tackled seem of great importance.
Weaknesses: 1) This paper is very theoretically dense and tries to tackle a lot of problems at once; the contributions are not clear. I think this paper would benefit from focusing solely on the Laminar problem and presenting the extensions of your work in an appendix. The demonstration would be clearer, and your paper more to the point.
2) Although rigorously written, it feels like this paper is a re-adaptation of the work from Sakaue and Oki to the class of M-convex functions and is thus not particularly original.
3) The results section is rather light compared to the extent of the theoretical claims on the Laminar, Box and Nested subproblems. I'd have preferred for each of the 3 classes of problems you mention, a first experiment on synthetic data with comparison between warm start and no warm-start. Then, a second experiment on real-world data to prove the superiority of your method.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - In figure 3, the y-axis label is set to "Iterations". What is the meaning of these iterations? Shouldn't this be the suboptimality gap?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are addressed.
No negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's valuable comments. We are glad that the reviewer found our experimental results promising. Below, we respond to each comment.
> **Weakness 1.** This paper is very theoretically dense and tries to tackle a lot of problems at once, the contributions are not clear. I think this paper would gain at focusing solely on the Laminar problem and possibly present the extension of your work in appendix. The demonstration would be clearer, and your paper more straight to the point.
Firstly, we would like to clarify that our contribution is two-fold.
1. We present a general warm-start-with-prediction framework for M-convex minimization (as described in Theorem 3.1), and
2. based on Theorem 3.1, we obtain specific time complexity bounds shown in Table 1 for M-convex-minimization subclasses: Laminar, Nested, and Box.
It should be noted that all our results can be interpreted in a unified manner as warm-starts with predictions for M-convex minimization. To demonstrate the versatility of this unified perspective, we have described how we can deal with those problems.
As the reviewer noted, the result for Laminar is technically the most significant (with the result for Nested following as a special case of Laminar). However, as outlined in Section 1.1, our result for Box is also crucial as it is the first demonstration in the literature [11, 7, 39, 35, 10] that the theoretical worst-case lower bound [19] can be surpassed using predictions.
We would appreciate it if the reviewer could understand the unified perspective and the significance of each result.
> **Weakness 2.** Although rigorously written, it feels like this paper is a re-adaptation of the work from Sakaue and Oki to the class of M-convex functions and is thus not particularly original.
Our work is not a re-adaptation of (Sakaue and Oki, 2022). Although L-convex minimization, addressed by Sakaue and Oki (2022), and M-convex minimization are nominally similar, they represent completely different classes of problems (L-convex encompasses bipartite matching, min-cost flow, etc., while M-convex includes Laminar, Nested, Box, etc.), hence named and studied separately in discrete convex analysis [31]. Consequently, our algorithms and theoretical analyses differ significantly from (Sakaue and Oki, 2022). Please refer to our [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) for further details.
> **Weakness 3.** The results section is rather light compared to the extent of the theoretical claims on the Laminar, Box and Nested subproblems. I'd have preferred for each of the 3 classes of problems you mention, a first experiment on synthetic data with comparison between warm start and no warm-start. Then, a second experiment on real-world data to prove the superiority of your method.
We appreciate your suggestion. We have conducted additional experiments on realistic settings provided in [47]. Please refer to the [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) for details. As with the experimental results in Section 5, under realistic settings (referred to as "CRASH" and "FUEL") with moderate noise, our method, Learn, outperformed the continuous-relaxation-based method, Relax. We hope the additional experimental results effectively address the reviewer's concern about the shortage of experiments.
To clarify, as explained in lines 222--224, the Nested and Box problems are special cases of the Laminar problem. Thus, our experimental results for the Laminar problem could imply similar results for Nested and Box. (Indeed, the additional experimental results with Nested instances were similar to the original ones with Laminar instances.)
> **Question.** In figure 3, the y-axis label is set to "Iterations". What is the meaning of these iterations? Shouldn't this be the suboptimality gap?
By "Iterations," we refer to the number of iterations of the greedy algorithm (Algorithm 1) taken to solve each $t$th instance, as noted in the figure caption. Thus, the experimental result should read as follows: as the x-axis increases (indicating an increase in the number of observed past instances), the prediction tends to become more accurate (as implied in Proposition 3.2), reducing the number of iterations of the greedy algorithm required to solve a new instance (i.e., the y-axis). This experimental result validates our $\\|x^* - \hat x\\|_1$-dependent time complexity bounds, i.e., the time complexity decreases as we learn a prediction $\hat x$ from more past instances in order to approximate an optimal solution $x^*$ more accurately in expectation.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their comprehensive answer that addressed the points made in my initial review. Although I'm not fully convinced by the novelty of this approach, I raised my evaluation from 3 to 5.
---
Reply to Comment 1.1.1:
Title: Gratitude for Your Feedback and Re-evaluation
Comment: We sincerely appreciate your feedback and the effort you devoted to re-evaluating our work. We are genuinely grateful for the improved score.
We acknowledge that the novelty of our work, especially when juxtaposed with (Sakaue and Oki, 2022), might appear subtle and perhaps challenging to grasp. That being said, our algorithms and analyses indeed differ from the existing ones. They are pivotal in obtaining our time complexity bounds, which serve as significant demonstrations of the power of predictions for improving algorithms' performance beyond the worst-case limitations.
We once again deeply appreciate your dedicated engagement with our work. Our gratitude cannot be overstated. Should there be any lingering questions or concerns, please do not hesitate to share them with us. | Summary: This paper applies the learning-augmented algorithms framework to a class of discrete optimization problems called M-convex. A function defined on an integer grid is M-convex if for every x, y it holds that f(x)+f(y) >= f(x-ei+ej)+f(y+ei-ej) for some base vectors ei, ej. The paper complements the line of research on improving running time of static algorithms using predictions, started by Dinitz et al. (NeurIPS’21), and is a natural followup to the Sakaue&Oki’s paper (NeurIPS’22) on another class, called L-convex.
The authors propose a general framework for solving M-convex optimization problems given a predicted optimal solution that is supposed to be close to the true one. Then, they give algorithms for specific subclasses of these problems, called Laminar, Nested, and Box (after the form of constraints allowed in each class). The running times they obtain are $\mathrm{O}(n\eta)$ for Laminar and Nested and $\mathrm{O}(n + \eta\log n)$ for Box, where $\eta = \|x^* - \hat{x}\|_1$ is the $\ell_1$-error of the prediction.
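As an aside, the exchange inequality quoted in the summary can be checked numerically. The following is our own sketch (not from the paper): a brute-force verification for the separable convex objective $f(x)=\sum_i x_i^2$, which is M-convex on the base set $\{x \ge 0 : \sum_i x_i = B\}$; the values of $B$ and $n$ are arbitrary choices.

```python
# A quick numerical sanity check (our sketch, not from the paper) of the
# M-convex exchange inequality for f(x) = sum_i x_i^2 on {x >= 0 : sum(x) = B}.
import itertools

B, n = 5, 3

def f(x):
    return sum(v * v for v in x)

# All integer points of the base set {x >= 0 : sum(x) = B}.
points = [p for p in itertools.product(range(B + 1), repeat=n) if sum(p) == B]

def exchange_holds(x, y):
    """For each i with x_i > y_i, some j with x_j < y_j must satisfy
    f(x) + f(y) >= f(x - e_i + e_j) + f(y + e_i - e_j)."""
    for i in range(n):
        if x[i] <= y[i]:
            continue
        found = False
        for j in range(n):
            if x[j] >= y[j]:
                continue
            x2 = list(x); x2[i] -= 1; x2[j] += 1
            y2 = list(y); y2[i] += 1; y2[j] -= 1
            if f(x) + f(y) >= f(x2) + f(y2):
                found = True
                break
        if not found:
            return False
    return True

assert all(exchange_holds(x, y) for x in points for y in points)
```

For this quadratic objective the inequality in fact holds with nonnegative margin $2(x_i - y_i) + 2(y_j - x_j) - 4 \ge 0$ for every valid pair $(i, j)$, which the exhaustive check confirms.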
Strengths: There are nice algorithms in Sections 4.1 and 4.2 (for rounding to the nearest feasible solution and for finding the steepest descent in the laminar case).
With (very) accurate predictions, the algorithm for the special case of box constraints can go (a tiny bit) below a known lower bound for classic (prediction-less) algorithms, and it seems to be the first such example in the literature about learning-augmented static algorithms.
The paper is written with care and not hard to follow.
Weaknesses: The algorithm/framework for general M-convex optimization is technically trivial. The algorithms for the laminar subclass are much more interesting, but I’m not convinced that this subclass is interesting enough on its own for the results to have substantial implications.
Interesting technical ideas seem very much tailored to the specific setup, so it might be difficult for this paper to have a larger impact or inspire further research.
The experiments are very simplistic – on a toy problem with synthetic data and predictions. (Still it was somewhat surprising to learn that, for a concentrated enough distribution of instances, an integral solution learned from previous instances can be a better initialization than an optimal solution to the fractional relaxation.) If I consider this paper to be a pure theory paper (which for now I do), then such experiments are not an issue, but in that case the theoretical contributions of the paper might not be strong enough to pass the ICML bar.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Could you find some real-world applications of laminar convex optimization that have existing benchmarks/datasets, and run experiments on them? (Even with some synthetic method for generating predictions.)
Statement in lines 59–60 seems not accurate. For instance, the negative-weight SSSP algorithm of [7] runs in O(m) time for very good predictions, beating the best known worst-case bound by a couple of log factors.
I do not agree with the statement in line 390 that you can beat best known algorithms “(...) given accurate predictions, which we can provably learn from past data.” The learnability proofs tell you that you can efficiently generate predictions as good as possible in a given context, but not that there exist predictions with small enough error. (This is a common limitation of PAC-learning framework, so I’m not complaining about your result, just about the way in which you describe it.)
Minor comments:
Line 26: At first it was totally unclear to me what “conjugate” was supposed to mean in this context.
Line 28: “Specifically, we focus on an important subclass (...)” – this sentence is misleading as it suggests that you only focus on the subclass, which is not the case, because you address the general problem as well.
Line 30: “(...) is widely studied (...)” – could you please provide some references?
Line 42: Please mention that the greedy algorithm is given later as Algorithm 1.
Line 101: “closet” -> “closest”
Line 223: For readability, I’d consider using a colon instead of \mid because \mid blends with the set cardinality symbol.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: Treating it as a theory paper, it does not have any important limitations (apart from the possibly limited impact itself). The title might be slightly misleading, since the most interesting results are only for a relatively narrow special case (Laminar) and not the general case mentioned in the title (M-convex).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's detailed and constructive comments. We are pleased that the reviewer recognizes the technical strengths of the algorithms in Sections 4.1 and 4.2 and the significance of our result for Box, a first demonstration of the potential to surpass the lower-bound result using predictions. Below are our responses to the review comments.
### On weaknesses
> I'm not convinced that this subclass is interesting enough on its own for the results to have substantial implications.
Although the subclasses (the Laminar, Nested, and Box problems) might not be as widely recognized in the machine learning (ML) community as the ones addressed in the literature (Dinitz et al. [11]; Chen et al. [7]; Davis et al. [10]; Polak and Zub [35]), those problems are extensively studied in operations research (OR) [18, 19, 20, 22, 30, 41, 42, 43, 46, 47] for their importance in the industrial domain. We believe our work can deliver a substantial impact by catalyzing cross-disciplinary future research between ML and OR.
> Interesting technical ideas seem very much tailored to the specific setup, so it might be difficult for this paper to have a larger impact or inspire further research.
Although the individual algorithms in Section 4 may appear specific, they stem from our general framework for M-convex minimization in Section 3. This general framework tells us that efficient $\ell_1$-projection and computation of a steepest descent direction of the form $-e_i + e_j$ are crucial for using predictions to accelerate algorithms for the broad class of M-convex minimization. This general implication could benefit future research by providing a clear direction for enhancing M-convex minimization algorithms with predictions.
### On questions
> Could you find some real-world applications of laminar convex optimization that have existing benchmarks/datasets, and run experiments on them? (Even with some synthetic method for generating predictions.)
In response to the question, we conducted additional experiments using Nested instances from [47]. Please refer to our [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) for details. (Unfortunately, we could not find datasets for Laminar instances due to the scarcity of public datasets in this domain.) Under realistic settings (referred to as "CRASH" and "FUEL") with moderate noise, our method, Learn, outperformed the continuous-relaxation-based method, Relax, as with the results in Section 5. We believe these additional experiments strengthen the practical side of our work and hope the reviewer will appreciate them.
> Statement in lines 59–60 seems not accurate. For instance, the negative-weight SSSP algorithm of [7] runs in O(m) time for very good predictions, beating the best known worst-case bound by a couple of log factors.
We thank the reviewer for pointing this out. We will revise lines 59--60 to clarify that the SSSP algorithm with predictions of Chen et al. [7] can surpass the best worst-case bound.
We would like to make two remarks to ensure that all reviewers have a correct understanding. (i) We are still the first to demonstrate the potential to surpass the *lower-bound* result [19] using predictions. (ii) Given accurate predictions, our algorithm for Laminar can achieve an $\mathrm{O}(n)$-factor improvement over the best-known algorithm [18, 34], in contrast to the logarithmic improvement of Chen et al. [7] upon the best algorithm.
> I do not agree with the statement in line 390 that you can beat best known algorithms "(...) given accurate predictions, which we can provably learn from past data." The learnability proofs tell you that you can efficiently generate predictions as good as possible in a given context, but not that there exist predictions with small enough error. (This is a common limitation of PAC-learning framework, so I'm not complaining about your result, just about the way in which you describe it.)
We appreciate this comment. As the reviewer mentioned, there is an inevitable limitation of agnostic PAC learning, i.e., even optimal predictions may not achieve small errors depending on the situation. We will clearly state this general limitation of the agnostic PAC learning framework in our revision.
We also appreciate the minor comments and will revise our title to better reflect our primary findings, as suggested.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Given the above explanations about the importance of M-convex optimization in the OR literature (and the comments of reviewer KCcb, who seems to be more knowledgeable in the area), as well as the additional experiments, I increase my score to 6.
---
Reply to Comment 1.1.1:
Title: Gratitude for Your Feedback and Re-evaluation
Comment: We deeply appreciate your meticulous attention to our paper, both in your initial detailed feedback and your subsequent review of our rebuttal. We are genuinely grateful for the revised score and heartened to know that the importance of our work in the OR literature has been recognized. We truly value your contributions to the review process. Should there be any lingering questions or concerns, we warmly invite you to share them with us. | Summary: The goal of the work is to minimize M-convex functions with prediction. The work mainly focuses on a subclass of M-convex functions that use Laminar, Nested or Box constraints.
Strengths: Minimizing M-convex functions is an important class of discrete optimization problems with a wide variety of applications. Algorithms with predictions are an important area of research that can address many problems in discrete convex analysis, which is especially valuable given that breadth of applications.
Soundness. The work is technically sound and uses relevant algorithms from the literature to solve a very important problem.
Presentation. The paper is very well written and flows flawlessly, with only very minor concerns.
Contribution. The contribution is the first of its kind for the class of problems to which it is applied. However, it must be noted that most of the major algorithms used in the paper were previously proposed in the literature. The main contribution is the modification of these classical algorithms to solve the bigger problem of minimizing M-convex functions with predictions.
Weaknesses: I don't find many weaknesses in this work.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I have two major concerns with the work:
1. In Section 4.2, it is clear that finding the node $j$ for a fixed node $i$ can be done in $O(n)$ time. However, I think it is not $O(n)$, as the dynamic programming algorithm iterates over $\mathcal{C}(Y)$, whose size is not a constant and depends on the laminar family. On the other hand, such a tree is much simpler in the Nested or Box case.
2. For the sake of clarity, I request the authors to also explain the learning-predictions part, even though it is the same as in other seminal works, at least in the appendix. The paper is not self-contained otherwise.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: This work would have been more complete if more efficient algorithms could be proposed for submodular constraints. This is the only reason for me not to give a strong accept. Although Laminar, Nested, and Box are interesting, I believe they have a limited application scope.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful reading and constructive feedback. We are pleased that the reviewer understood our main contribution and appreciated the importance of our results. Below are our responses to the comments.
> **Question 1.** In Section 4.2, it is clear that finding the node $j$ for a fixed node $i$ can be done in $O(n)$ time. However, I think it is not $O(n)$ as the dynamic programming algorithm iterates over $\mathcal{C}(Y)$ and this is not a constant, which depends on the type of laminar function. On the other hand, such a tree is much simpler for a nested case or box case.
We assume, for simplicity, that the tree $T_\mathcal{F}$ is binary, i.e., $|\mathcal{C}(Y)| \le 2$, as stated in lines 231--233. Thus, the dynamic programming algorithm terminates in $\mathrm{O}(n)$ time. It should be noted that we can always preprocess $T_\mathcal{F}$ to make it binary by adding at most $n$ dummy nodes without worsening the asymptotic time complexity. This treatment is indeed only for simplifying the analysis. Even if $T_\mathcal{F}= (\mathcal{V}, E)$ is non-binary, the sum of $|\mathcal{C}(Y)|$ over all nodes $Y \in \mathcal{V}$ is $n - 1$ since every non-root node has exactly one parent. Thus, the total time complexity of the dynamic programming algorithm is $\mathrm{O}(\sum_{Y \in \mathcal{V}} |\mathcal{C}(Y)|) = \mathrm{O}(n)$ anyway.
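The binarization preprocessing described above can be sketched as follows. This is a hypothetical illustration of ours (not the paper's code): every node with $k > 2$ children receives $k - 2$ fresh dummy internal nodes, so at most $n$ nodes are added overall and every node ends up with at most two children.

```python
# Hypothetical sketch (not the paper's code) of binarizing a laminar tree:
# each node with k > 2 children gets k - 2 fresh dummy internal nodes.
import itertools

def binarize(children, node, out, fresh):
    kids = list(children.get(node, []))
    for k in kids:
        binarize(children, k, out, fresh)
    while len(kids) > 2:
        d = next(fresh)                    # new dummy node
        out[d] = [kids.pop(), kids.pop()]  # group two children under the dummy
        kids.append(d)
    out[node] = kids

# A laminar tree whose root has four children (the leaves a, b, c, d).
children = {"root": ["a", "b", "c", "d"]}

# Even before binarization, sum_Y |C(Y)| = (#nodes) - 1, since every
# non-root node has exactly one parent:
assert sum(len(v) for v in children.values()) == 5 - 1

out = {}
binarize(children, "root", out, (f"dummy{i}" for i in itertools.count()))
assert all(len(v) <= 2 for v in out.values())
assert sum(1 for k in out if k.startswith("dummy")) == 2  # k - 2 = 2 dummies
```

The final two assertions reflect both halves of the argument above: the tree becomes binary, and the number of added dummies stays linear in $n$.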
> **Question 2.** For the sake of clarity, I request the authors to also explain the learning predictions part even though it is the same as used in other seminal works at least in the appendix. The paper is not self contained otherwise.
We appreciate the reviewer's helpful suggestion. We agree with the need to elaborate further on the learning of predictions for completeness, although the procedure is the same as (Khodak et al., 2022). We will include a more detailed explanation in our revision.
> **Limitation.** This work would have been more complete if more efficient algorithms could be proposed for submodular constraints. This is the only reason for me to not give strong accept. Although laminar, nested and box are interesting, I believe, they have limited application scope.
We appreciate this comment. Certainly, developing more efficient algorithms for general submodular constraints is of great significance; this is an avenue for future work. Below is our perspective on this challenge. In the general M-convex case, we obtain an initial feasible solution by projecting a prediction onto the base polyhedron of a submodular function. This process contains the membership testing for the base polyhedron as a special case, which is considered to be as hard as submodular minimization due to the equivalence of minimization and separation. Therefore, designing a projection algorithm that is much faster than submodular minimization (or $\mathrm{O}(\textsf{SFM})$-time) for general M-convex minimization would be challenging.
On the other hand, our $\mathrm{O}(n\textsf{SFM})$-time projection described in Section 3.1 and Appendix A.1 is helpful in that it ensures the existence of a polynomial-time projection algorithm for general M-convex minimization. That is, when we consider applying our warm-start-with-prediction framework to new M-convex minimization subclasses, we know from our general $\mathrm{O}(n\textsf{SFM})$-time bound that the projection can be done in polynomial time. This information effectively focuses our attention on designing more efficient projection algorithms using problem-specific structures.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed feedback. I have gone through the other reviews too and the effort the authors put to answer all the questions is commendable. I intend to stick to my rating that I provided earlier.
---
Reply to Comment 1.1.1:
Title: Gratitude for Your Feedback
Comment: We sincerely appreciate the reviewer's thoughtful feedback and careful examination of other reviews. Your constructive engagement in the review process has truly encouraged us. We are delighted with your agreement towards acceptance. | Summary: Extending the warm-starting techniques in L-convex function minimization by Sakaue and Oki (2022), the authors study the problem of acclerating M-convex function minimization with predictions. The idea is to start from a (possibly infeasible) predicted solution, project the rounded solution to the feasible region and then apply the standard M-convex function minimization greedy algorithm. In particular, the authors show that when applied to Laminar convex minimization, a special case of M-convex function minimization, their framework can achieve better time complexity than current worst-case time complexity provided that the prediction is accurate enough. Experiments on the staff-assignment problem confirm that the proposed framework can help reduce the number of iterations of the greedy algorithm.
Strengths: The paper studies an interesting problem, and is written well in terms of structure, clarity in explanation and technical presentation. Theoretical results are justified by empirical experiments.
Weaknesses: 1. The framework proposed in this paper for M-convex optimization is somewhat similar to the one in Sakaue and Oki (2022) for L-convex optimization. So novelty in the general framework is limited.
2. The authors provide a comparison between their results and the worst-case time complexity for laminar convex minimization. However, it seems like their complexity outperforms the worst-case bound only under the assumption of highly accurate predictions (small enough $\ell_1$-norm prediction error), which does not necessarily hold in practice.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The staff assignment problem does not seem to be a very realistic test problem to me (with 12800 staff members and 128 tasks). Are there more standard test problems? How does the proposed framework perform on them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful to the reviewer for providing valuable comments. Below we respond to each comment.
> **Weaknesses 1.** The framework proposed in this paper for M-convex optimization is somewhat similar to the one in Sakaue and Oki (2022) for L-convex optimization. So novelty in the general framework is limited.
While our framework for M-convex minimization, at a high level, resembles that for L-convex minimization in (Sakaue and Oki, 2022), these two classes are entirely different, as detailed in our [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD). In particular, the definitions of convexity and corresponding steepest descent directions completely differ between L- and M-convex cases. Thus, new efficient methods for projection and computation of steepest directions are required, as discussed in Section 4. We hope the reviewer understands that our paper has sufficient novelty, despite the apparent similarity of the general frameworks.
> **Weaknesses 2.** The authors provide a comparison between their results and worst-case time complexity for laminar convex minimization. However, it seems like their complexity outperforms the worst-case only under the assumption of highly accurate predictions (small enough $\ell_1$ norm prediction error), which does not necessarily hold in practice.
Assuming accurate predictions in order to surpass the worst-case limitations is standard in the literature [11, 7, 39, 35, 10]; similar ideas are also common in the *beyond-the-worst-case* paradigm [38]. Researchers in these fields have theoretical interests in doing better than the worst-case limitations on algorithm performance by leveraging past data. Therefore, even if accurate predictions are demanded, our theoretical time complexity bounds, which potentially surpass the best worst-case results [14, 18, 19, 34, 46] and a lower-bound result [19], are of great significance. It should also be noted that our $\mathrm{O}(n\| x^* - \hat x\|_1)$-time bound for the Laminar problem outperforms the best known bound of $\mathrm{O}(n^2\log n \log (mR/n))$ [18, 34] if $\| x^* - \hat x\|_1 = \mathrm{O}(n)$, i.e., every element is allowed to have a constant error on average. Surpassing the best worst-case results with such a mild assumption on prediction accuracy is rare in this context.
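Spelling out the arithmetic behind this comparison (our own restatement of the bounds quoted above): if every element carries only a constant error on average, i.e., $\|x^* - \hat x\|_1 = \mathrm{O}(n)$, then

$$\mathrm{O}\bigl(n\,\|x^* - \hat x\|_1\bigr) = \mathrm{O}(n^2) \quad\text{vs.}\quad \mathrm{O}\bigl(n^2 \log n \log (mR/n)\bigr),$$

so the prediction-based bound saves a $\log n \cdot \log(mR/n)$ factor over the best known worst-case bound [18, 34].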
We also remark that if accurate predictions do not exist, surpassing the worst-case limitations is inevitably difficult. That is, if past instances exhibit no tendency, we cannot utilize them to improve algorithm performance. Considering this, as in Proposition 3.2, we aim to learn predictions that perform best on the underlying distribution of instances, hoping that such best predictions incur small errors in practice. This idea is customary in general *agnostic PAC learning*, as also mentioned by Reviewer [SFP3](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=ci542rY7dc). We will clearly state this point in our revision.
> **Question.** The staff assignment problem does not seem to be a very realistic test problem to me (with 12800 staff members and 128 tasks). Are there more standard test problems? How does the proposed framework perform on them?
In response to the reviewer's question, we conducted additional experiments using Nested instances from [47]. Please refer to our [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) for details. As with the results in Section 5, under realistic settings (referred to as "CRASH" and "FUEL"), our method, Learn, outperformed the continuous-relaxation-based method, Relax, in scenarios with moderate noise. We believe that the additional experiments adequately validate the performance of our method in practice and hope the reviewer will appreciate them.
> **Limitation.** The authors admit that solving real-world instances requires tailored methods for learning predictions. Therefore, it is not clear when one can benefit from the framework the authors propose.
We would like to clarify that tailored learning methods are not always necessary. Predictions learned via standard online subgradient descent, as in our experiments, are helpful enough to reduce the computation cost of solving real-world instances, which the additional experiments also validate. We intended to express that such tailored learning methods could further enhance performance. We apologize for the misleading description.
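The standard online subgradient step referred to above can be sketched as follows. This is a minimal hypothetical version of ours (not the paper's or Khodak et al.'s actual code): the loss for instance $t$ is $\|x^*_t - \hat x\|_1$, whose subgradient with respect to $\hat x$ is $-\mathrm{sign}(x^*_t - \hat x)$, so with step size 1 each observed optimum nudges every coordinate of the prediction one unit in the right direction.

```python
# Minimal hypothetical sketch (not the paper's code) of online subgradient
# descent on the per-instance loss ||x*_t - x_hat||_1.

def sgn(v):
    return (v > 0) - (v < 0)

def subgradient_step(x_hat, x_star, lr=1):
    """One step of x_hat <- x_hat - lr * subgradient of ||x_star - x_hat||_1."""
    return [xh + lr * sgn(xs - xh) for xh, xs in zip(x_hat, x_star)]

x_hat = [0, 0, 0]
for _ in range(3):                 # three past instances with optimum [3, 1, 2]
    x_hat = subgradient_step(x_hat, [3, 1, 2])
assert x_hat == [3, 1, 2]          # the prediction locks onto the repeated optimum
```

With step size 1 and integer optima, the prediction stays integral throughout, which is convenient here since the greedy algorithm is warm-started from an integer point.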
---
Rebuttal Comment 1.1:
Comment: I appreciate your response and the new sets of experiments. They certainly contribute to a better understanding of the issues I had raised.
---
Reply to Comment 1.1.1:
Title: Gratitude for Your Feedback
Comment: We sincerely appreciate your swift feedback and the effort dedicated to the review process. It's a privilege to interact with a responsive and considerate reviewer like you. We're pleased to know that our response and additional experiments have addressed your concerns.
In accordance with the reviewer guidelines, which mention, "If your evaluation of the paper has changed, please revise your review and explain the change," we kindly request that you consider updating your review if our response has affected your evaluation. Should you have any further questions or concerns, please do not hesitate to share them with us. | Rebuttal 1:
Rebuttal: # Global response on experiments and technical novelty
We sincerely thank all reviewers for providing valuable feedback. Given the mixed reviews, we deem it necessary to begin by addressing key comments. Below, we address comments on experiments and technical novelty.
## Experiments
First, we recap our experimental results. In Figure 3, the x- and y-axes indicate the numbers of past instances observed and of iterations taken by the greedy algorithm (Algorithm 1) to solve the new $t$th instance, respectively. Increasing x-axis values imply that predictions $\hat x$ better approximate optimal solutions $x^*$ in expectation (cf. Proposition 3.2). Our proposed method, denoted by "Learn," effectively reduces the number of iterations as the number of observed instances grows, supporting our $\|x^* - \hat x\|_1$-dependent time complexity bounds.
We also emphasize that the "Relax" benchmark method is a strong competitor, which follows the same continuous-relaxation idea as the fastest method for Laminar with quadratic objectives [30]. As Reviewer [SFP3](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=ci542rY7dc) mentioned, it is noteworthy that our experiments have shown the potential to outperform Relax by using predictions. We could also implement and test other existing methods. However, while we are interested in how predictions affect the number of iterations of the greedy algorithm, other methods do not necessarily take the form of a similar iterative method, making a fair comparison difficult. Even if we were to compare based on running time, the results would be influenced by implementation and could not yield informative outcomes.
We conducted additional experiments to address the concern about the lack of experiments with standard datasets. While public datasets are scarce in this domain, we have found that Wu et al. [47] have made their code for the Nested problem available. As in [47, Section 6.3], there are three types of objective functions: "F," "CRASH," and "FUEL." F is a synthetic fourth-order polynomial, while CRASH and FUEL come from real-world project crashing and ship speed optimization, respectively. Following [47], we obtained Nested instances with those objectives and generated $T=100$ instances by perturbing the parameters defining constraints and objectives with Gaussian noise scaled by $\sigma = 0.1, 1.0, 10.0$, which controls the noise strength. As in Section 5, we measured the number of iterations of the greedy algorithm initialized by Cold (the cold-start baseline), Relax, and Learn over the $100$ instances.
The results are shown in the attached PDF file. For CRASH and FUEL with $\sigma = 0.1, 1.0$, Learn outperformed Relax, as with the results in Section 5. As for F, Relax performed extremely well and beat Learn, probably because the synthetic fourth-order objective is easy to handle for the continuous-relaxation method. Still, it is significant that Learn can surpass Relax for CRASH and FUEL, which come from real-world applications, under moderate noise. We believe these results effectively address reviewers' concerns about the performance of our method in practice.
## Technical novelty
Reviewers [NNSF](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=Ae4ha8dz3k) and [xsf6](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=CzMG1jdMzT) have expressed concerns about the perceived overlap with (Sakaue and Oki, 2022). Sakaue and Oki (2022) have studied warm-starts with predictions for L-convex minimization, and the high-level idea of using predictions for discrete convex minimization is similar to ours. However, M-convex minimization (which includes the Box, Nested, and Laminar problems addressed in our paper) is fundamentally different from L-convex minimization (which includes bipartite matching, min-cost flow, etc.). This is why they are named and studied separately in discrete convex analysis [31]. In particular, the definitions of feasible regions (or convex sets) and locally steepest directions entirely differ between M- and L-convex cases. Consequently, new methods for projection and finding directions for M-convex minimization are required, which we have developed for Laminar, Nested, and Box in Section 4.
Reviewers [xgTQ](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=BpSA9YYtSV) and [pZm4](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=2mlaXbCFul) have suggested that our technical contribution is limited since Theorem 3.1 is straightforward given Proposition 2.2 presented in [44, Corollary 4.2]. We emphasize that our technical contribution is not only Theorem 3.1 but, more importantly, developing fast algorithms with predictions for Laminar, Nested, and Box in Section 4 using Theorem 3.1.
Specifically, to obtain our $\mathrm{O}(n \\|x^* - \hat x\\|_1)$-time algorithm for Laminar, we have developed
- an $\mathrm{O}(n)$-time projection method by carefully modifying the $\mathrm{O}(n\log^2 n)$-time convex min-sum convolution [45] (see Section 4.1) and
- an $\mathrm{O}(n)$-time dynamic programming method for computing the steepest descent direction by refining the method in [30] (see Section 4.2). (The direct application of [30] costs $\mathrm{O}(n^2)$ time.)
As for Box, we achieved faster $\mathrm{O}(\log n)$-time direction finding in Section 4.3 by using the min-heap technique. While the technique is standard, this improvement is crucial for our method for Box to enjoy the potential to go beyond the lower-bound result [19] with moderately accurate predictions, as discussed in Section 1.1. Our polynomial-time projection and direction finding for general M-convex minimization in Section 3.1 and Appendix A.1 also involve substantial technical sophistication.
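As a toy illustration of how warm-starting controls the greedy iteration count, the following sketch (our own simplified example, not the paper's Laminar/Nested/Box algorithms) minimizes a separable convex objective $\sum_i w_i x_i^2$ over nonnegative integer vectors with a fixed coordinate sum via steepest exchange steps:

```python
def greedy_exchange(w, x):
    """Steepest exchange-greedy for minimizing sum_i w[i] * x[i]**2
    over nonnegative integer vectors with a fixed coordinate sum.
    A toy stand-in for M-convex greedy descent; illustrative only."""
    x = list(x)
    iters = 0
    while True:
        best_delta, best_move = 0, None
        for i in range(len(x)):          # donor coordinate: x[i] -> x[i] - 1
            if x[i] == 0:
                continue
            for j in range(len(x)):      # receiver coordinate: x[j] -> x[j] + 1
                if i == j:
                    continue
                delta = (w[i] * ((x[i] - 1) ** 2 - x[i] ** 2)
                         + w[j] * ((x[j] + 1) ** 2 - x[j] ** 2))
                if delta < best_delta:   # steepest improving exchange
                    best_delta, best_move = delta, (i, j)
        if best_move is None:            # no improving step: optimum reached
            return x, iters
        i, j = best_move
        x[i] -= 1
        x[j] += 1
        iters += 1

x_cold, it_cold = greedy_exchange([1, 2, 4], [8, 0, 0])  # cold start, far from optimum
x_warm, it_warm = greedy_exchange([1, 2, 4], [5, 3, 0])  # warm start, close to optimum
# both converge to [5, 2, 1]; the warm start needs fewer exchange steps
```

Each exchange step reduces the $\ell_1$ distance to the optimum by at most two, so the iteration count scales with $\\|x^* - \hat x\\|_1$, which is the behavior the bounds above formalize for the much richer Laminar, Nested, and Box constraint structures.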
All in all, despite the existence of (Sakaue and Oki, 2022) and other existing studies [30, 44, 45], our work has significant technical novelty, which is crucial for the potential of our time complexity bounds in Table 1 to surpass the best worst-case and lower-bound results.
Pdf: /pdf/5f2a18fc00691f8ff218f8419108c41e7c8cb031.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies some classes of $M$-convex minimization problems, to which the recent framework of "warm-starts with predictions" is applied. The paper provides provable time complexity bounds on the standard greedy algorithm for $M$-convex minimization where the bounds are dependent on the $\ell_1$-distance between the optimal solution and the predicted initial solution. The theoretical performance guarantees are promising and improve upon the existing methods that are not using predictions. At the same time, however, one would argue that the technical contributions of this paper are limited in that the results rely on and are deduced by applications of the existing results in the literature. Furthermore, one would be interested to see the numerical impact of the framework on more concrete problem settings, e.g., portfolio management and resource allocation, which were not tested in this paper.
Strengths: * The time complexity bounds provided in this paper are the first results that analyze the performance of the framework of warm-starts with predictions applied to $M$-convex function minimization.
* The time complexity bounds improve upon the existing bounds for some classes of $M$-convex minimization problems when we may obtain an accurate prediction where the accuracy is measured by the $\ell_1$-distance to the (unique) optimal solution.
Weaknesses: * The technical contributions of this paper are limited. The greedy algorithm and main results of this paper are built upon and follow from [(Shioura (2022), Corollary 4.2)](https://pubsonline.informs.org/doi/abs/10.1287/moor.2021.1180) which gives an upper bound on the number of required iterations for the greedy algorithm in terms of the proximity term. The rest of the results are basically about bounding $T_{\text{init}}$, the time required to convert a prediction to a feasible solution, and $T_{\text{loc}}$, the time bound for computing a locally steepest direction. Even these results follow from standard techniques in the literature.
* This paper lacks computational demonstration. $M$-convex minimization has applications in resource allocation, equilibrium analysis, and portfolio management, but none of these problems were tested. In particular, one would be interested in how well the framework of this paper performs for the operations management models studied in [(Chen and Li, (2021))](https://pubsonline.informs.org/doi/abs/10.1287/opre.2020.2070). Furthermore, the numerical results reported in Section 6 do not consider the methods against which the theoretical complexity bounds are compared.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * When testing the performance of this paper's framework against the methods without predictions, would it be fair to consider the time to compute a good prediction? Proposition 3.2 provides the regret bound by the standard online learning method, but what would be the required number of iterations to deduce the desired proximity bound on $\|x^*-\hat x\|_1$? Can you compare the total time required by the framework of this paper, adding up the time for learning a good prediction and the time to solve the problem with the warm start, against the methods without predictions?
* You mention that the required framework for $M^\natural$-convex minimization is similar to that for $M$-convex minimization. Can you provide more details?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful comments. We are delighted that the reviewer has found our improvements using predictions upon existing methods promising. We respond to each comment below.
### On weaknesses
> The technical contributions of this paper are limited. The greedy algorithm and main results of this paper are built upon and follow from (Shioura (2022), Corollary 4.2) which gives an upper bound on the number of required iterations for the greedy algorithm in terms of the proximity term. The rest of the results are basically about bounding $T_{\text{init}}$, the time required to convert a prediction to a feasible solution, and $T_{\text{loc}}$, the time bound for computing a locally steepest direction. Even these results follow from standard techniques in the literature.
While deriving the general bound (Theorem 3.1) from (Shioura (2022), Corollary 4.2) is not difficult, our techniques used in Section 4 for bounding $T_{\text{init}}$ and $T_{\text{loc}}$ are not straightforward, as detailed in [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD). Specifically, we have modified the $\mathrm{O}(n\log^2 n)$-time convolution (Teng and Luo, 1996) to achieve $T_{\text{init}} = \mathrm{O}(n)$-time projection, as in Section 4.1. Furthermore, we have improved the direction-finding method of (Moriguchi et al. 2011) to achieve $T_{\text{loc}} = \mathrm{O}(n)$-time direction finding (where the direct use of the original one results in $T_{\text{loc}} = \mathrm{O}(n^2)$, as discussed in lines 311--313). Moreover, our polynomial-time projection and direction-finding methods for the general $\text{M}$-convex case require non-trivial techniques on the base polyhedron of submodular functions, as discussed in Section 3.1 and Appendix A.1.
> This paper lacks computational demonstration.
> $\text{M}$-convex minimization has applications in resource allocation, equilibrium analysis, and portfolio management, but none of these problems were tested. In particular, one would be interested in how well the framework of this paper performs for the operations management models studied in (Chen and Li, 2021). Furthermore, the numerical results reported in Section 6 do not consider the methods against which the theoretical complexity bounds are compared.
We conducted additional experiments using Nested instances from [47], which involve two real-world settings, project crashing (CRASH) and ship speed optimization (FUEL), and one synthetic setting (F). Please refer to our [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) for details. Under the CRASH and FUEL settings with moderate noise, our method, Learn, outperformed the continuous-relaxation-based method, Relax, consistent with the results in Section 5. We believe these additional experimental results strengthen the practical side of our work and hope the reviewer will appreciate them.
Unfortunately, we could not find public datasets for the operations management models in (Chen and Li, 2021). Also, we employed the continuous-relaxation method (Relax) rather than the theoretically fast methods for the convenience of fair comparisons based on the number of iterations of the greedy algorithm, as detailed in the [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD).
### On questions
> When testing the performance of this paper's framework against the methods without predictions, would it be fair to consider the time to compute a good prediction? Proposition 3.2 provides the regret bound by the standard online learning method, but what would be the required number of iterations to deduce the desired proximity bound on $\\|x^*-\hat x\\|_1$? Can you compare the total time required by the framework of this paper, adding up the time for learning a good prediction and the time to solve the problem with the warm start, against the methods without predictions?
In the literature [11, 7, 39, 35, 10], it is customary and regarded as fair to consider only the time to solve new instances when comparing against the methods without predictions; i.e., we assume predictions are given. This is because data of past instances is already provided, and learning a prediction $\hat x$ from it is usually allowed to take much longer than the time for solving an upcoming instance. That is, only $T_\text{init} + T_\text{loc}\\| x^* - \hat x\\|_1$ matters when we focus on solving upcoming instances quickly, and we learn $\hat x$ beforehand to make $\\| x^* - \hat x\\|_1$ small.
Furthermore, in our case, predictions are learned via online subgradient descent whose *single* iteration constitutes the time for learning a prediction at each $t$ (see lines 369--371). This increase in the total computation time is negligible compared to the hundreds of iterations of the greedy algorithm (Algorithm 1) on each $t$th instance.
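The per-instance learning step described above can be sketched as follows; the $\ell_1$ subgradient update is standard, but the step size, zero initialization, and function names here are illustrative assumptions rather than the paper's exact setup:

```python
def sign(v):
    # componentwise sign: -1, 0, or 1
    return (v > 0) - (v < 0)

def online_subgradient(optima, eta=0.5):
    """Learn a warm-start prediction x_hat by one subgradient step per
    observed optimal solution x*, on the loss ||x_hat - x*||_1.
    The step size eta and the zero initialization are illustrative."""
    x_hat = [0.0] * len(optima[0])
    for x_star in optima:
        g = [sign(h - s) for h, s in zip(x_hat, x_star)]   # l1 subgradient
        x_hat = [h - eta * gi for h, gi in zip(x_hat, g)]  # single cheap update
    return x_hat

x_hat = online_subgradient([[3, 1]] * 10)  # repeated identical instances
```

On this toy stream of identical optima, the prediction converges to the common optimum, illustrating why a single subgradient step per instance adds negligible overhead compared to running the greedy algorithm itself.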
> You mention that the required framework for $\text{M}^\natural$-convex minimization is similar to that for $\text{M}$-convex minimization. Can you provide more details?
There is a one-to-one correspondence between $\text{M}^\natural$-convex functions of $n$ variables and $\text{M}$-convex functions of $n+1$ variables, as described in Section 6.1 in [31]. Indeed, for an $\text{M}^\natural$-convex function $f: \mathbb{Z}^N \to \mathbb{R} \cup \{+\infty\}$, we can construct an $\text{M}$-convex function $\tilde{f}: \mathbb{Z} \times \mathbb{Z}^N \to \mathbb{R} \cup \{+\infty\}$ by setting $\tilde{f}(x_0, x) = f(x)$ if $x_0 = -x(N)$ and $\tilde{f}(x_0, x) = +\infty$ otherwise for $x_0 \in \mathbb{Z}$ and $x \in \mathbb{Z}^N$. Clearly, $x \in \mathbb{Z}^N$ minimizes $f$ if and only if $(-x(N), x)$ minimizes $\tilde{f}$.
Thus, we can apply our warm-start framework to $\text{M}$-convex $\tilde{f}$ to solve the original $\text{M}^\natural$-convex minimization of $f$. Alternatively, we can slightly modify our framework to deal with $\text{M}^\natural$-convex minimization directly without using $\tilde{f}$.
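The reduction described above can be sketched in a few lines; `to_m_convex` and the example function are hypothetical names for illustration only:

```python
import math

def to_m_convex(f):
    """Lift an M-natural-convex f on Z^n to an M-convex f_tilde on Z^(n+1)
    by pinning the extra coordinate to x0 = -sum(x), as in the reduction
    described above (a sketch; names are illustrative)."""
    def f_tilde(x0, x):
        return f(x) if x0 == -sum(x) else math.inf
    return f_tilde

f = lambda x: sum((xi - 1) ** 2 for xi in x)  # toy convex function, minimized at (1, 1)
f_tilde = to_m_convex(f)
```

By construction, $(x_0, x)$ attains a finite value only on the hyperplane $x_0 = -x(N)$, so minimizers of `f_tilde` correspond one-to-one to minimizers of `f`.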
---
Rebuttal Comment 1.1:
Comment: As the reviewer did not yet respond to the author response, I will provide additional information and feedback.
Regarding the technical novelty. I believe the author response to be satisfactory. The response regarding establishing the linear worst-case time complexity addresses the reviewer's comment.
Regarding additional experiments, I remain neutral. I believe the NESTED case adds additional results strengthening the claims, but I do agree with the reviewer's comments about a wider study of applications. However, as these experiments are comparable to previous work I am familiar with in this area, I do not believe this to be an overly negative factor for the paper.
I believe the response to the questions is also helpful.
Thank you
---
Reply to Comment 1.1.1:
Title: Gratitude to the Area Chair for Valuable Feedback
Comment: We sincerely thank the Area Chair for their insightful feedback on our paper. It's reassuring to know that the Area Chair has understood our response addressing the reviewer's comment on the technical novelty. Regarding the experiments, we understand that even with the additions, there remains room for improvement. Still, we believe that they effectively complement the practical side of our theoretical findings. Once again, our genuine appreciation goes out to the Area Chair's unwavering dedication to the review process and valuable feedback on our work.
---
Rebuttal Comment 1.2:
Comment: Thank you so much for the detailed comments on my earlier review report and apologies for my late response. I agree with the area chair's judgment that the paper has more technical significance than my earlier evaluation. I also appreciate the additional numerical experiments conducted for a short period of time. That said, I raised my score from 6 to 7.
---
Reply to Comment 1.2.1:
Title: Gratitude for Your Feedback and Re-evaluation
Comment: We deeply appreciate the reviewer's valuable feedback. We fully understand that the discussion period is often extremely busy. We are truly grateful that despite this, you took the time to review our response and additional experiments, provide feedback, and revise the score. Thank you very much. | Summary: The paper discusses the growing interest in accelerating optimization algorithms using machine-learned predictions. It highlights the work of Sakaue and Oki, who introduced a general framework for employing predictions to warm-start the L-convex function minimization method, demonstrating its effectiveness for various discrete optimization problems. Building on this, the paper presents a new framework that leverages predictions to accelerate M-convex function minimization with improved time complexity bounds, thereby extending the applicability of predictive techniques to a wider range of discrete optimization algorithms.
Strengths: This paper is well-written, presenting intuitive results. It exhibits technical solidity, providing clear explanations of how the proposed algorithm enhances the existing worst-case time complexity.
Weaknesses: 1. The main theoretical results appear to be straightforward. The computational time complexity relies on the distance between the initialization and the optimal solution, assuming a good initialization is provided. However, one limitation of this paper is the absence of an analysis regarding the methodology for obtaining such a good initialization.
2. The difficulty in deriving the corresponding theoretical results is not well elucidated in the paper. For instance, proving Theorem 3.1 hinges on Proposition 2.2, which is simply a direct application from existing works [44]. It would be more beneficial if the paper provided a clearer explanation of the technical challenges involved in obtaining the theoretical results.
3. This paper lacks numerical justifications for the proposed theoretical time complexity bound.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Could the authors present concrete evidence illustrating how the theoretical bounds offer valuable insights capable of significantly accelerating the practical applications using the proposed algorithm?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing valuable feedback. We present our response to each comment below.
> **Weakness 1.** The main theoretical results appear to be straightforward. The computational time complexity relies on the distance between the initialization and the optimal solution, assuming a good initialization is provided. However, one limitation of this paper is the absence of an analysis regarding the methodology for obtaining such a good initialization.
We would like to clarify that our contribution is not merely deriving the general distance-dependent bound (Theorem 3.1). More important results are specific time complexity bounds for the Laminar, Nested, and Box problems (Table 1), which are significant in the literature [11, 7, 39, 35, 10] as they demonstrate the possibility of surpassing the best worst-case bounds and even a lower bound [19] using predictions, as discussed in Section 1.1. To achieve those bounds, we have developed fast projection and direction-finding methods and combined them with Theorem 3.1, as discussed in Section 4. Please refer to our [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) for further details of our technical novelty for developing those algorithms.
We also point out that the procedure for obtaining a good initialization (or learning a prediction) is covered in lines 169--187. As we have discussed there, we can use the same online subgradient descent method as [23] for our purpose, which enjoys the regret and sample-complexity bounds as stated in Proposition 3.2.
> **Weakness 2.** The difficulty in deriving the corresponding theoretical results is not well elucidated in the paper. For instance, proving Theorem 3.1 hinges on Proposition 2.2, which is simply a direct application from existing works [44]. It would be more beneficial if the paper provided a clearer explanation of the technical challenges involved in obtaining the theoretical results.
We would like to re-emphasize that Theorem 3.1 is not our primary result. Our main results are the time complexity bounds for the Laminar, Nested, and Box problems, summarized in Table 1, and the core technical challenges reside in devising efficient algorithms for those problems, as described in Section 4.
> **Weakness 3.** This paper lacks numerical justifications for the proposed theoretical time complexity bound.
We conducted additional experiments using Nested instances from [47]. Please refer to our [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) for details. Results from the additional experiments, along with the original ones in Section 5, indicate that by learning predictions $\hat x$ from more past instances to approximate optimal solutions $x^*$ better in expectation, we can reduce the number of iterations of the greedy algorithm, thus justifying our theoretical $\\|x^* - \hat x\\|_1$-dependent time complexity bounds.
> **Question.** Could the authors present concrete evidence illustrating how the theoretical bounds offer valuable insights capable of significantly accelerating the practical applications using the proposed algorithm?
The experiments in Section 5 and the additional ones in the [global response](https://openreview.net/forum?id=xtQ9IGRzIW&noteId=3iQVwbCgqD) offer concrete evidence that the computational cost decreases in practice as predictions become more accurate, aligning directly with our $\\|x^* - \hat x\\|_1$-dependent time complexity bounds. More precisely, the number of iterations of the greedy algorithm (y-axis) decreases as the number of past instances observed (x-axis) increases, where the increasing number of observed instances enables learning of predictions $\hat x$ closer to optimal solutions $x^*$ in expectation. In essence, our theoretical bounds offer a practical implication that learning better predictions on larger datasets accelerates the greedy algorithm, which the experiments validate empirically.
As previously mentioned, Proposition 3.2 ensures that we can learn predictions $\hat x$ that are close to the best-performing prediction $\hat x^*$ given enough past instances. Specifically, we can use the online subgradient descent method to learn such predictions; please also refer to lines 369--372 for the learning procedure used in our experiments.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification and additional experiments
Comment: I appreciate the authors' efforts in providing further clarification and conducting additional experiments. While it's possible that my own limited intuition and familiarity with the field could be contributing to this, I still find it challenging to grasp the technical novelty inherent in the theoretical results presented in this paper. Nevertheless, with the inclusion of supplementary experiments that validate the proposed theoretical findings, I have revised my initial score from 4 to 5.
---
Reply to Comment 1.1.1:
Title: Gratitude for Your Feedback and Re-evaluation
Comment: We sincerely appreciate your thoughtful reconsideration of our work, especially in light of the additional experiments we conducted. Your dedication to the review process is evident, and we are truly grateful for the improved score.
We acknowledge that fully grasping the novelty of our research can pose challenges. For a complete understanding, one needs to delve into various existing techniques [30, 44, 45] and the recent study by Sakaue and Oki (2022). Nevertheless, we are confident in the significance of our results, especially in the context of "acceleration of algorithms with machine-learned predictions," established by the seminal work by Dinitz et al. [11]. Our experiments also corroborate the results.
Once again, we deeply value your earnest engagement with our paper and your thoughtful response to our rebuttal. Should you have any further questions or concerns, please do not hesitate to share them with us. | null | null | null | null |
Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach | Accept (spotlight) | Summary: Drawing inspiration from physics principles, the paper proposes the use of conservative Hamiltonian neural flows to construct GNNs that are robust against adversarial attacks. The adversarial robustness of different neural flow GNNs is empirically evaluated on several benchmark datasets, considering a variety of adversarial attacks.
Strengths: This paper investigates the vulnerability of graph neural networks (GNNs) to adversarial perturbations that affect both node features and graph topology. The study focuses on GNNs derived from various neural flows and explores their connection to different stability notions.
Weaknesses: The following are four weaknesses.
1. The paper's writing quality is poor due to the excessive amount of equations and formulas and their explanation from a physics viewpoint. The authors should use the language of adversarial research to bring out the main insight into why this work is important for adversarial research.
2. In addition, more of the wording needs to get to the core idea of the equations rather than revolving around technical terms.
3. The reviewer did not see the connection between Hamiltonian mechanics and energy conservation.
4. Is there any other, simpler option that possesses the notion of conservation besides Hamiltonian mechanics? In addition, why is it important to conserve energy from the viewpoint of topological structure?
Overall, the reviewer recommends that the authors answer these questions in language that is insightful at least to the reviewer, not in mathematical or physical language. The reviewer does not want a theoretical proof, only a genuinely contribution-level insight: Why is it important to consider energy or conservation, and why are "energy and topology" bridged? Why is the Hamiltonian important? Please answer using adversarial language.
---
The authors' rebuttal has fully explained the connection between energy conservation in the realm of the Hamiltonian and the adversarial robustness of GNNs using the language of adversarial research. The reviewer thanks the authors for their thorough commentary on the reviewer's point.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Refer to Weaknesses
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 3 good
Limitations: Refer to Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Improve Paper Writing: Weakness 1 and 2
We greatly value your feedback. Our paper delves into the relationship between the stability of graph ODE models and adversarial robustness. This connection is elaborated upon in segments such as Remark 2, lines 116-125, 165-180, and 280-285, where we correlate system stability with adversarial resistance.
To enhance clarity:
1. Appendix Inclusion: Some secondary equations will be moved to an appendix, ensuring the main content remains focused.
2. Restrained Physics Emphasis: We will succinctly present physics-related sections, especially Section 4.1, while retaining essential insights. In-depth physics discussions will be directed to the appendix.
3. Highlighting Adversarial Context: We will provide more intuitive explanations for the choice of Hamiltonian mechanics and how it contributes to combatting adversarial attacks. Some example answers have been provided in our following responses, which will be further expanded in our revision.
# Hamiltonian Mechanics, Energy Conservation and Robustness: Weaknesses 3 and 4
Thank you for raising this point, and we are grateful for the opportunity to provide further clarity.
Hamiltonian mechanics is not just a theoretical underpinning in our work but a core mechanism to underscore conservation principles. Specifically, the energy of our system, represented as $H_{net}$ in equation (8), remains invariant over time, notably in the context of graph ODE models. To shed more light on this:
\begin{align*}
\frac{\mathrm{d} H_{net}}{\mathrm{d} t} & =\sum_{i=1}^n \frac{\partial H_{net}}{\partial q_i} \dot{q}_i+\sum_{i=1}^n \frac{\partial H_{net}}{\partial p_i} \dot{p}_i\\\\
& =\sum_{i=1}^n \frac{\partial H_{net}}{\partial q_i} \frac{\partial H_{net}}{\partial p_i}+\sum_{i=1}^n \frac{\partial H_{net}}{\partial p_i}\left(-\frac{\partial H_{net}}{\partial q_i}\right)\\\\
& =0.
\end{align*}
What this means is that $H_{net}$ retains its value as $t$ changes, as stated by Theorem 2. Now, in the realm of physics, $H_{net}$ is interpreted differently across varied contexts, say as mechanical energy in a pendulum or as energy in an electron's charged field. In our framework, we model $H_{net}$ using a neural network that ingests the full graph data and yields a scalar. This scalar, by design, remains invariant over time, echoing the conservation principle rooted in Hamiltonian mechanics.
For a clearer insight into the energy concept within HANG, consider the "time" in graph neural flows as analogous to the "layers" in standard GNNs (note that ODE solvers basically discretize the "time", which indeed approximately turns the model into a layered one). Here, the feature vector $\mathbf{q}(t)$ and the momentum vector $\mathbf{p}(t)$ evolve with time, bound tightly by equation (9). Given that $\mathbf{q}(t)$ mirrors the node features at layer $t$, $\mathbf{p}(t)$ can be understood as the variation in node features over time - essentially, the evolution of node features between successive layers. Thus, our defined energy interweaves both the node feature and its rate of change across adjacent layers. The constant $H_{net}$ implies inherent constraints on the node features and their alteration pace over layers.
Because $H_{net}$ processes the whole graph data and yields a scalar, it serves as a constraint on the global graph feature and its variation, which we believe to be crucial in countering adversarial attacks.
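The conservation property above can be checked numerically on a toy one-dimensional Hamiltonian $H = \frac{1}{2}(p^2 + q^2)$ (a simple stand-in for $H_{net}$, not the paper's learned network), using a symplectic leapfrog integrator:

```python
def leapfrog(q, p, h, steps):
    """Symplectic leapfrog integration of dq/dt = dH/dp = p and
    dp/dt = -dH/dq = -q for the toy Hamiltonian H = (p**2 + q**2) / 2."""
    for _ in range(steps):
        p -= 0.5 * h * q   # half kick
        q += h * p         # drift
        p -= 0.5 * h * q   # half kick
    return q, p

q0, p0 = 1.0, 0.0
H0 = 0.5 * (p0 ** 2 + q0 ** 2)
q1, p1 = leapfrog(q0, p0, h=0.01, steps=1000)
H1 = 0.5 * (p1 ** 2 + q1 ** 2)
# H1 stays within a tiny integration error of H0
```

Here $(q, p)$ traces out a closed orbit in phase space while $H$ stays (near-)constant, mirroring how the ODE solver evolves node features and momenta under a conserved $H_{net}$.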
Furthermore, by rewriting equation (9) in the following form,
\begin{align}
\left[\begin{array}{c}
\dot{q}(t) \\\\
\dot{p}(t)
\end{array}\right]
&=
\mathbf{M}
\left[\begin{array}{c}
\frac{\partial H_{\mathrm{net}}}{\partial q}\\\\
\frac{\partial H_{\mathrm{net}}}{\partial p}
\end{array}\right],
\end{align}
where $\mathbf{M}=\left[\begin{array}{cc}
\mathbf{0} & \mathbf{I}\\\\
-\mathbf{I} & \mathbf{0}
\end{array}\right]$ is a real antisymmetric matrix whose eigenvalues all have zero real part,
we can see that the antisymmetric matrix $\mathbf{M}$ leads to a rotation in the feature space $\left[\begin{array}{c} q(t) \\\\ p(t) \end{array}\right]$. This implies that the norm of a node remains constant over time while the phase of the feature may change. This property preserves distances between features of different nodes when their norms differ at the initial point. This, in turn, ensures a well-posed and robust forward propagation and learning process. Figure 1 in the main paper illustrates this property.
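The norm-preserving rotation induced by an antisymmetric generator can be verified on the smallest case, $\mathbf{M} = \left[\begin{array}{cc} 0 & 1 \\\\ -1 & 0 \end{array}\right]$, whose flow $e^{t\mathbf{M}}$ is a plane rotation (a minimal sketch of the linear case, not the full graph-level system):

```python
import math

def rotate(z, t):
    """Flow exp(t*M) of dz/dt = M z for the antisymmetric M = [[0, 1], [-1, 0]]:
    a plane rotation by angle t, so the norm of z is preserved."""
    c, s = math.cos(t), math.sin(t)
    return (c * z[0] + s * z[1], -s * z[0] + c * z[1])

z0 = (3.0, 4.0)  # norm 5
norms = [math.hypot(*rotate(z0, t)) for t in (0.3, 1.0, 2.5)]
# every entry of norms equals 5.0 up to floating-point error
```

The phase of $z$ changes with $t$ while its norm does not, which is exactly the distance-preserving behavior claimed above for node features with differing initial norms.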
From an adversarial perspective, the attacker modifies either the node features or the underlying graph topology. These modifications are propagated through multiple aggregation steps, such as layers in conventional GNNs or integrals in graph ODEs.
Here is where the Hamiltonian energy conservation property (as mentioned in our previous response) plays a crucial role. While the Hamiltonian considers the energy of the entire graph, adversarial attacks often target localized regions of the graph. The inherent global energy perspective of the Hamiltonian system makes it resilient to such localized attacks, as local perturbations often get "absorbed" or "mitigated" when viewed from the perspective of the entire system. When adversarial perturbations are introduced, they might indeed tweak the instantaneous features of certain nodes. However, the challenge lies in modifying the trajectory (or evolution) of these nodes (positions $q(t)$ and the variations $p(t)$) in the phase space in a manner that's aligned with the rest of the graph, all while upholding the energy conservation constraints. This feat is arduous, if not impossible, without creating detectable inconsistencies elsewhere.
This property ensures that the energy of each node feature is preserved over time and multiple aggregation steps. As a result, the distances between features of different nodes are preserved if their norms differ initially before the adversarial attack.
---
Rebuttal 2:
Title: Thanks to Reviewer Xukb
Comment: It's truly encouraging to know that our rebuttal addressed your concerns effectively! We genuinely value your insightful feedback and are grateful for your positive recognition of our work! | Summary: This paper proposes a robust GNN model by leveraging the notion of Hamiltonian Energy Conservation. Specifically, authors first analyze the stabilities and limitations of several neural ODE-based GNNs, which motivate the proposed model HANG that is inspired by Hamiltonian classical mechanics. Experimental results indicate that HANG outperforms prior ODE-based GNNs against various adversarial attacks on several realistic graph datasets.
Strengths: - The paper is well written. All technical steps are easy to follow.
- Authors propose a novel approach to improve GNN robustness.
- Authors clearly motivate the proposed approach by analyzing the stability limitations of prior ODE-based GNNs.
- The proposed model has been evaluated against various adversarial attacks.
Weaknesses: - The major issue of this work is that it only compares with a limited number of (and relatively weak) defense baselines. There are some recent methods such as [1, 2] showing better results than the baselines chosen in this paper. It would further improve the paper if authors could compare HANG with those methods.
- Figure 1 only shows that the norms of node features become closer as time goes on, which does not support the authors' claim that node features gradually become similar (lines 182-183). The authors may want to show instead that the norm of the relative difference of node features approaches zero.
[1]: Geisler et al., “Robustness of Graph Neural Networks at Scale”, NeurIPS'21. \
[2]: Deng et al., “GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks”, LoG'22.
I'm willing to raise the score if my major concerns are addressed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is it possible to further improve GNN robustness by combining HANG with some prior defense methods (e.g. graph preprocessing-based methods), as they improve robustness in orthogonal ways?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to the Weaknesses section above for limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: |Dataset|Attack|HANG|HANG-GUARD|HANG-quad|HANG-quad-GUARD|
|-|-|-|-|-|-|
|Cora|clean|87.13±0.86|86.54±0.57|79.68±0.62|81.23±0.70|
||PGD|78.37±1.84|86.23±0.55|79.05±0.42|80.91±0.67|
||TDGIA|79.76±0.99|85.56±0.34|79.54±0.65|81.11±0.76|
||MetaGIA|77.48±1.02|86.0±0.60|78.28±0.56|80.10±0.53|
|Citeseer|clean|74.11±0.62|75.95±0.66|71.85±0.48|73.15±0.61|
||PGD|72.31±1.16|75.38±0.82|71.07±0.41|73.07±0.63|
||TDGIA|72.12±0.52|74.54±0.69|71.69±0.40|73.04±0.52|
||MetaGIA|72.92±0.66|75.22±0.66|71.60±0.48|73.11±0.45|
|Pubmed|clean|89.93±0.27|89.96±0.25|88.10±0.33|88.93±0.18|
||PGD|81.81±1.94|87.72±0.89|87.69±0.57|88.99±0.11|
||TDGIA|86.62±1.05|88.86±0.40|87.55±0.60|88.80±0.12|
||MetaGIA|87.58±0.75|88.23±0.93|87.40±0.62|88.78±0.16|
**Table R6.** Node classification accuracy (%) on graph **injection, evasion, non-targeted** attack in **inductive** learning.
----
# New Baselines[1][2]: Weakness 1
We appreciate the suggestion to broaden our comparison. The updated experiments are detailed subsequently.
However, we wish to emphasize that our paper's primary focus is not solely on defense comparisons. Instead, we're keen on delving into the relationship between stability concepts and robustness **in the realm of graph neural ODE flows.** These stability concepts, intrinsic to graph neural flows, highlight the unique properties of such models. Therefore, while there are newer defense techniques like [1] and [2] with promising performance, our exploration into stability in graph neural flows presents a different facet, distinct from conventional defense methods.
In response to the feedback, we've incorporated additional experiments in ``Table R7``, which include evaluations against defense techniques from [1] and [2].
In our supplementary material Table S5, we present the performance of our model when subject to Nettack. This result can be directly compared with the results from [2], as shown in ``Table R7``.
Moreover, we also added experiments to demonstrate that our method is orthogonal to other defense methods. For example, the method presented in [2], GARNET, is a preprocessing defense mechanism that can be integrated with any GNN model. In our experiments, we employed the HANG model as the backbone and incorporated the GARNET defense. As depicted in ``Table R7``, the robust accuracy of HANG significantly benefits from the integration of GARNET. Furthermore, our combined approach, HANG-GARNET, outperforms the GCN-GARNET combination, underscoring the robustness and efficacy of our proposed model.
The research by [1] introduced a robust aggregation GCN named Soft-Median-GCN. We evaluated this under two specific graph modification attacks, namely Nettack and Metattack. The outcomes of these evaluations are in ``Table R7`` and ``Table R8``. It can be seen that our proposed models, HANG and HANG-quad, surpass Soft-Median-GCN under similar attack settings. This reaffirms the robustness of our models against such graph modification attacks.
----
|Dataset|Ptb-rate|HANG|HANG-GARNET|HANG-quad|HANG-quad-GARNET|GCN|GCN-GATNET[2]|Soft-Median-GCN[1]|
|-|-|-|-|-|-|-|-|-|
|Cora|1|75.54±3.10|82.41±0.80|76.99±3.16|83.01±0.84|70.06±0.81|79.75±2.35|78.67±2.29|
||2|73.73±3.64|80.84±1.57|76.51±2.60|79.88±0.55|68.60±1.81|79.60±1.50|73.98±1.72|
||3|68.43±4.23|80.48±1.69|73.13±2.85|79.76±0.72|65.04±3.31|74.42±2.06|73.01±1.54|
||4|66.02±2.21|70.0±1.47|72.53±2.14|75.90±0.54|61.69±1.48|69.60±2.67|68.91±1.77|
||5|60.12±3.63|67.83±1.87|68.80±2.55|69.28±1.34|55.66±1.95|67.04±2.05|66.51±1.05|
**Table R7.** Node classification accuracy (%) on graph **Nettack targeted** attack in **transductive** learning.
----
# Visualization: Weakness 2
Thank you for the valuable suggestion. In response to your suggestion, we have revised Figure 1 to provide a more accurate representation of the gradual similarity of node features. The new plot ``Figure R2`` in the uploaded pdf now displays the norm of the difference of node features, which indeed approaches zero as time progresses. This modification aligns with our claim and better conveys the intended message.
# Combining HANG with Other Defenses: Question 1.
We would like to express our gratitude for your insightful suggestion. We recognize the potential benefits of combining defense mechanisms that operate through orthogonal strategies. To explore this, we integrated HANG with the GNNGuard preprocessing-based defense, known for its effectiveness in enhancing GNN robustness [P2], and with GARNET, another preprocessing defense mechanism.
We have conducted experiments to evaluate the combined performance of HANG with GNNGuard and GARNET, respectively. The results, as detailed in ``Table R6 and Table R7``, demonstrate that this combination indeed improves the robustness accuracy of our model. This outcome underscores the compatibility of our approach with existing defense strategies and validates the potential for synergistic enhancements.
Finally, it is also noteworthy that HANG can be combined with adversarial training techniques, such as PGD-AT, to further enhance its robustness. We refer the reviewer to the response **Adversarial Training (AT) and New AT Baselines** to **``Reviewer bDPL``** and ``Tables R1 and R2`` for more details.
[P2] GNNGuard: Defending Graph Neural Networks against Adversarial Attacks, NeurIPS, 2020.
|Dataset|Ptb-rate(%)|HANG|HANG-quad|Soft-Median-GCN[1]|
|-|-|-|-|-|
|Polblogs|0|94.77±1.07|94.63±1.06|94.69±0.35|
||5|80.19±2.52|94.38±0.82|75.46±0.66|
||10|74.92±4.32|92.46±1.56|75.63±1.34|
||15|71.65±1.34|90.85±2.43|71.60±0.74|
||20|66.27±3.39|89.19±3.72|65.53±0.72|
||25|65.80±2.33|86.89±8.90|64.72±2.30|
|Pubmed|0|85.08±0.20|85.23±0.14|85.22±0.11|
||5|85.08±0.18|85.12±0.18|84.17±0.08|
||10|85.17±0.23|85.05±0.19|81.90±0.21|
||15|85.0±0.22|85.15±0.17|79.05±0.51|
||20|85.20±0.19|85.03±0.19|75.74±0.23|
||25|85.06±0.17|84.99±0.16|72.75±0.39|
**Table R8.** Node classification accuracy (%) on graph **Metattack non-targeted** attack in **transductive** learning.
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: Thanks for the detailed response and additional experiments, which clearly demonstrate the advantages of the proposed model. I would encourage authors to incorporate the clarification on the primary focus of this paper, as well as new empirical results of HANG+defense methods, in the revision, which would definitely improve the paper.
Since authors have addressed my major concerns, I raise my score to 7.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer ehTk
Comment: We are truly heartened by your feedback and are glad our rebuttal addressed your concerns! Your thoughtful observations will guide our revision. We commit to delineating the primary focus of the paper with greater clarity and will integrate the referenced experiments, along with the other defense methods, into the revision. | Summary: This paper explores the robustness of Graph Neural Networks (GNNs) against adversarial attacks. Drawing inspiration from principles in physics, the authors propose a novel model called Hamiltonian Neural Flows for constructing GNN models. The effectiveness of the proposed method is evaluated on various benchmark datasets.
Strengths: The problem addressed in this paper is both significant and intriguing, and the proposed method is well-supported by principles from physics.
Weaknesses: As I am a non-expert in physics-based methods, my evaluation is based on my understanding of GNN methods and educated guesses. I suggest the AC ignore my evaluation if there are expert reviewers in the field.
Here are a couple of suggestions to improve the paper:
In my opinion, the comparison with existing baselines seems inadequate. The TDGIA paper was published in 2021, while MetaGIA was published in early 2022 or late 2021. It would be beneficial if the authors could include more recent graph attack papers and compare their proposed method against them.
Additionally, it would be interesting to see a comparison between the proposed method and the Graph Isomorphism Network (GIN).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see cons above
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Clarify and New Attacks:
Thank you for your valuable feedback and suggestions. We truly value the time and effort you've dedicated to reviewing our paper.
To provide some clarity, our work offers a fresh perspective on GNNs' adversarial robustness by probing the stability of graph ODE-based GNNs when viewed as dynamical systems. This insight underlines the critical role energy conservation plays in enhancing robustness. Drawing inspiration from Hamiltonian mechanics, our proposed HANG model paves a novel path to fortify GNNs against adversarial attacks. Importantly, our model can cooperate with other defense methods, such as graph preprocessing or adversarial training, to amplify its robustness. We are optimistic that this approach will spark further investigations in this field.
Regarding your suggestion to include more recent graph attack papers for comparison, we agree that it would be beneficial to our study. New experimental results using the GANI(L) attack from a recent paper [W1] are provided in ``Table R4``. However, this attack appears less potent than the attack strategies we have explored in our paper. As evidenced in ``Table R4``, **HANG consistently exhibits better robustness,** even when faced with this newer attack approach.
Additionally, ``Tables R1 and R2`` in our response to ``Reviewer bDLP`` and ``Tables R6 and R7`` in our feedback to ``Reviewer ehTk`` provide insights into our model's efficacy when combined with adversarial training and other pre-processing defense strategies, respectively.
[W1] Fang J, Wen H, Wu J, et al. GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections. arXiv preprint arXiv:2210.12598, 2022.
| Dataset | Attack | HANG | HANG-quad | GCN | SGC |
|---------|----------|--------------|--------------|-------|-------|
| Cora | GANI(L) | **77.48±0.92** | 77.37±1.17 | 74.92 | 75.39 |
**Table R4.** Node classification accuracy (%) on graph **injection, poisoning, non-targeted GANI** attack in **transductive** learning.
---
# Comparison with GIN:
Thank you for your suggestion regarding the comparison with the Graph Isomorphism Network (GIN). We've taken your suggestion on board and incorporated GIN results into our evaluations, as shown in ``Table R5``. Notably, GIN, in line with other non-ODE-based GNNs, struggles under adversarial attacks. This result again highlights the unique resilience offered by HANG's energy conservation properties.
| Dataset | Attack | HANG | HANG-quad | GIN |
|----------|----------|--------------|--------------|-------------|
| Cora | *clean* | **87.13±0.86** | 79.68±0.62 | 82.63±1.02 |
| | PGD | 78.37±1.84 | **79.05±0.42** | 32.86±0.23 |
| | TDGIA | **79.76±0.99** | 79.54±0.65 | 31.36±0.46 |
| | MetaGIA | 77.48±1.02 | **78.28±0.56** | 32.68±0.21 |
| Citeseer | *clean* | **74.11±0.62** | 71.85±0.48 | 72.48±1.0 |
| | PGD | **72.31±1.16** | 71.07±0.41 | 28.20±3.37 |
| | TDGIA | **72.12±0.52** | 71.69±0.40 | 19.98±1.43 |
| | MetaGIA | **72.92±0.66** | 71.60±0.48 | 32.36±0.72 |
| Pubmed | *clean* | **89.93±0.27** | 88.10±0.33 | 86.46±0.24 |
| | PGD | 81.81±1.94 | **87.69±0.57** | 39.01±0.17 |
| | TDGIA | 86.62±1.05 | **87.55±0.60** | 40.35±1.13 |
| | MetaGIA | **87.58±0.75** | 87.40±0.62 | 40.47±0.41 |
**Table R5.** Node classification accuracy (%) on graph **injection, evasion, non-targeted** attack in **inductive** learning. | Summary: Since neural ordinary differential equation networks can exhibit inherent robustness, in this work the authors perform an extensive study of different graph neural flows and their behavior under several stability notions: BIBO stability, Lyapunov stability, structural stability, and conservative stability. The authors find that graph neural flows using Hamiltonian energy functions achieve improved empirical adversarial robustness under black-box attacks like PGD and TDGIA. The authors show that designing graph neural flows that ensure conservative stability along with Lyapunov stability can help achieve improved adversarial robustness. As is common in the literature, the authors generate a white-box attack from a surrogate model and then transfer it to the black-box model to evaluate its robustness. Evaluation is done on node injection attacks as well as graph manipulation attacks, and significantly improved robustness is seen in both cases.
Strengths: * I think using graph neural flows to understand if we can achieve inherent robustness without the need for adversarial training is interesting.
* The way the authors have related different stability criteria with robustness is quite interesting.
* The results show significant improvements.
Weaknesses: * There have been many instances where defences claiming to be robust have later been evaded using adaptive attacks [1]. Adversarial training has been the most successful defence strategy, and the currently successful defences use adversarial training to achieve robustness. Therefore, it is necessary that the robustness of the proposed method be evaluated properly. I think that surrogate-model-based black-box attacks are not strong enough to capture the worst-case robustness. Therefore, it is important that evaluation with strong white-box attacks like PGD, PGD with max-margin loss, or the Carlini and Wagner attack [2] be used to verify robustness, and this evaluation should be carried out in a white-box setting. Further, it is important that the authors share a robust accuracy (both white-box and black-box) vs. threat-model constraint plot for the proposed approach, GAT, and GAT trained using PGD-AT [3].
* Comparison is done only with models trained using standard training. It is important to include adversarially trained models as baselines. For instance, the authors should include a PGD-AT trained model as a baseline in all the tables.
* In the case of GNNs it is important to ensure that the attacked graph remains imperceptible and that the attacked nodes in node injection attacks cannot be pruned off. Therefore, other than robust accuracy, the imperceptibility of the graph for a given threat model is also a very important metric to consider. Similar to [], I request the authors to also include a comparison of the imperceptibility of the attack on the proposed graph neural flow network and on baselines like GAT. It is important to ensure that the imperceptibility of the proposed defence is either lower than or equal to the baselines against the PGD and TDGIA attacks.
* The authors should also look at these works [4,5] and try to compare their method with them.
[1] Athalye, Anish et al. “Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.” International Conference on Machine Learning (2018).
[2] Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks."
[3] Madry, Aleksander et al. “Towards Deep Learning Models Resistant to Adversarial Attacks.”
[4] Li, Jintang et al. “Spectral Adversarial Training for Robust Graph Neural Network.”
[5] Li, Jintang et al. “Spectral Adversarial Training for Robust Graph Neural Network.”
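For concreteness, the kind of white-box PGD evaluation requested above can be sketched as follows (a toy $L_\infty$ PGD loop on a linear classifier; the model, weights, and step sizes are illustrative assumptions, not the graph-attack variants discussed in the paper):

```python
import numpy as np

# Minimal L_inf PGD loop against a toy linear classifier sign(w.x).
def pgd_attack(w, x, y, eps=0.3, alpha=0.05, steps=20):
    x_adv = x.copy()
    sign = 2 * y - 1  # +1 for class 1, -1 for class 0
    for _ in range(steps):
        # The input-gradient of the margin sign*(w.x) is sign*w, so a
        # loss-ascent step moves against it, then projects back into
        # the L_inf ball of radius eps (the threat model).
        x_adv -= alpha * sign * np.sign(w)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.4, -0.3, 0.2])
y = 1  # clean margin w.x = 1.1 > 0, i.e. correctly classified
x_adv = pgd_attack(w, x, y)
assert np.all(np.abs(x_adv - x) <= 0.3 + 1e-9)  # perturbation stays in budget
assert w @ x_adv < w @ x                        # the margin was reduced
```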
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: It would be great if the authors could address the concerns raised in the weaknesses section.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, the authors seem to have addressed the limitations. I would suggest that the authors kindly look at the weaknesses section for suggestions and comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # White Box Attack: Weakness 1
We are grateful for your attention to the robustness evaluation. We **do include the results of the white-box attack in Table S3 of our supplementary material.** The results clearly demonstrate that both HANG and HANG-quad exhibit superior robustness compared to the other baseline models. We thank you for highlighting this, and we will ensure that these results are appropriately emphasized in the main paper as well.
# Adversarial Training (AT) and New AT Baselines: Weakness 2, 3 and 5
Thank you for your suggestion. We agree that comparing our model with adversarially trained models would provide a more comprehensive evaluation of its performance. However, we would like to emphasize the core objective of our paper is to elucidate the inherent stability and robustness of graph-ODE-based models.
It is also noteworthy that our HANG model can be combined with adversarial training techniques, such as PGD-AT, to further enhance its robustness.
To address the reviewer's concern, we carried out more experiments applying **PGD-AT** to both the GAT and HANG models and assessed their robust accuracy under PGD, TDGIA, and METAGIA attacks, encompassing both white-box and black-box scenarios.
As depicted in ``Tables R1 and R2``, PGD-AT does provide an uptick in the robustness of all models. Yet, ``Table R2`` interestingly reveals that the effect of PGD-AT on the GAT model's resilience to the **distinct white-box METAGIA attack** is somewhat limited. This observation underscores the boundaries of PGD-AT's effectiveness, especially when attackers adopt tactics not synonymous with AT techniques.
On the other hand, HANG consistently achieves commendable performance across the spectrum of attacks, with a marked improvement when integrated with PGD-AT. It's noteworthy that HANG-AT largely withstands the onslaught of these attacks, a testament to HANG's inherent robustness.
We would also like to point out that the new baseline [4,5], referencing the same publication, delves into adversarial training from a graph spectral viewpoint. We incorporated the Spectral Adversarial Training (SAT) from [4,5] into our HANG paradigm, giving rise to the HANG-SAT variant. As shown in ``Tables R1 and R2``, the fusion of SAT notably elevates the robustness of HANG against diverse attacks, with minimal post-attack accuracy reduction, further demonstrating HANG's innate resilience.
Furthermore, we have taken the initiative to augment our experimentation by including comparisons with recently proposed defense methods. For further details, please refer to ``Table R6, Table R7, and Table R8.``
# Imperceptibility: Weakness 4
Thank you for your insightful comments regarding the imperceptibility of attacks on GNNs. We understand the importance of this facet in adversarial studies and value the chance to elaborate on its role within our HANG framework.
1. Firstly, it is important to emphasize that the nature of graph imperceptibility largely stems from the attack methodology, as highlighted in references [P1], [P2], and [P3], rather than the targeted model itself. Be it under white-box or black-box conditions, the subtlety of the graph structure post-attack remains consistent, regardless of the model in focus. Were there a new measure of imperceptibility, the research community would likely craft an attack to ensure its adherence.
In our work, the TDGIA, Metattack, and Nettack techniques utilized have built-in measures to ensure imperceptibility to the graph. These measures strictly adhere to the prescribed attack budgets. For a deeper dive into this aspect, we direct your attention to Table S4 in the supplementary material, which showcases experimental results across different attack intensities.
Furthermore, it's pertinent to mention that the design philosophy behind HANG is distinct from defenses that either detect or make alterations to the graph structure.
2. Secondly, we initiated an examination of unnoticeability by conducting a degree distribution test under various threat models. The results, presented in ``Figure R1`` of the attached document, unveil a high degree of structural similarity between the original and modified graph topologies, affirming the imperceptibility we aimed to achieve.
Additionally, reference [P1] sheds light on another advanced imperceptibility metric termed the homophily distribution. Here, homophily denotes the propensity of nodes to establish connections with peers sharing similar attributes or labels (Equation (6) in [P1]).
To provide more clarity, ``Figure R3`` in the uploaded pdf presents a comparison of the homophily distributions before and after the attacks on both HANG and the baseline models.
To rigorously evaluate our approach's resilience, we also incorporated the homophily constraint into the attacks using Harmonious Adversarial Objective (HAO)[P1]. This adaptation has resulted in a notably consistent homophily distribution, evident in the second row of ``Figure R3``. The results in ``Table R3`` demonstrate the enhanced performance of our proposed model over the baselines, emphasizing its robustness even with increased imperceptibility constraints.
[P1] Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng, "Understanding and Improving Graph Injection Attack by Promoting Unnoticeability," ICLR, 2022.
[P2] Fang J, Wen H, Wu J, et al. GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections. arXiv preprint arXiv:2210.12598, 2022.
[P3] Liu Z, Wang G, Luo Y, et al. What Does the Gradient Tell When Attacking the Graph Structure. arXiv preprint arXiv:2208.12815, 2022.
## _Table R1, Table R2, and Table R3 are provided in the top global rebuttal window._
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: I am not sure if the authors have evaluated their method properly on the adaptive attacks. Other answers given by the authors seem to satisfy my concerns and therefore, I would like to raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Apologies for the delayed response.
Thank you for your thoughtful feedback. We appreciate your comments, which have improved our paper.
In our exploration of potential gradient masking, we observed that the accuracy under white-box attacks in Table S3 is lower than under black-box attacks in Table 2. This observation suggests the absence of significant gradient masking within our model. Additionally, as indicated in Table S4, the attack success rate increases consistently with larger distortion, correlating with the gradual decrease in accuracy. These empirical findings, in alignment with the insights provided in section 3.1 of the paper [1] you referenced, indicate that our model may not exhibit pronounced gradient masking.
We acknowledge the importance of this investigation and recognize that more exploration on this topic is warranted for future work.
Thank you once again for your invaluable feedback and insights. Your engagement greatly enriches our work and contributes to its refinement. | Rebuttal 1:
Rebuttal: In response to the reviewers' feedback, we have undertaken several substantial efforts to enhance the quality and comprehensibility of our paper. Here is a summary of the key actions we have taken:
1. **New Experiments**: We have executed a series of new experiments to bolster our findings and comparisons:
- Conducted adversarial training on our model, presenting the results in ``Table R1`` and ``Table R2``.
- Introduced new baselines and defense methods, capturing the outcomes in ``Tables R5, R7 and R8``.
- Explored the implications of a novel graph attack method, reflected in ``Table R4``.
- Explored the synergy of preprocessing defence methods combined with our model, showcased in ``Tables R6 and R7``.
- Implemented more imperceptible attacks ``Table R3`` and visualized their imperceptibility through ``Figures R1 and R3``.
2. **Energy Conservation Visualization**: To underline the energy conservation property inherent to our approach, we introduced a visualization in ``Figure R2`` depicting the norm of feature differences versus time. This illustration effectively highlights the consistent energy preservation in our model.
3. **Intuitive Explanation of Hamiltonian and Energy Conservation**: Recognizing the need for clarity, we have provided a more intuitive explanation regarding the significance of Hamiltonian mechanics and how energy conservation intricately links to adversarial robustness. These insights aim to enhance the reader's understanding of our approach.
4. **Core Contribution Clarification**: We have further clarified and succinctly articulated the core contribution of our paper. This ensures that the reader grasps the pivotal insights and implications of our research.
### New experimental results: **Tables R1 - R3**:
|Dataset|Attack|HANG|HANG-AT|GAT|GAT-AT|HANG-SAT[4][5]|
|-|-|-|-|-|-|-|
|Pubmed|*clean*|**89.93±0.27**|89.45±0.33|87.41±1.73|85.88±0.47|88.38±0.48|
||PGD|81.81±1.94|**89.03±0.19**|48.94±12.99|77.78±8.39|87.65±1.77|
||TDGIA|86.62±1.05|87.16±3.89|47.56±3.11|79.56±5.76|**87.55±2.11**|
||MetaGIA|87.58±0.75|**88.93±0.18**|44.75±2.53|81.86±3.44|88.22±0.22|
**Table R1.** Node classification accuracy (%) on graph **injection, evasion, non-targeted, black-box** attack in **inductive** learning.
----
|Dataset|Attack|HANG|HANG-AT|GAT|GAT-AT|HANG-SAT[4][5]|
|-|-|-|-|-|-|-|
|Pubmed|*clean*|**89.93±0.27**|89.42±0.22|87.41±1.73|85.90±0.44|88.38±0.48|
||PGD|68.62±2.82|**88.86±0.15**|38.04±4.91|80.85±4.51|88.19±0.28|
||TDGIA|69.56±3.16|**88.98±0.20**|24.43±4.10|82.16±4.10|88.28±0.34|
||MetaGIA|84.64±1.20|**88.61±0.20**|40.02±1.34|44.37±6.48|85.76±4.12|
**Table R2.** Node classification accuracy (%) on graph **injection, evasion, non-targeted, white-box** attack in **inductive** learning.
----
| Dataset | Attack | HANG | HANG-quad | GRAND | GAT | GCN |
|---|---|---|---|---|---|---|
| Cora | *clean* | 86.32±0.53 | 80.56±0.65 | 87.62±0.75 | 87.47±0.51 | **87.99±0.45** |
| | TDGIA+HAO | **82.81±0.96** | 80.60±0.65 | 42.21±4.21 | 50.04±15.04 | 35.19±0.96 |
| | MetaGIA+HAO | **81.30±0.74** | 80.05±0.87 | 44.33±1.70 | 41.58±2.83 | 39.95±0.41 |
| Citeseer |*clean*| 74.78±0.71 | 73.22±0.74 | **76.31±0.86** | 72.45±0.89 | 73.22±0.65 |
| | TDGIA+HAO | **73.38±0.35** | 72.38±0.48 | 36.79±2.98 | 31.18±8.62 | 24.41±2.32 |
| | MetaGIA+HAO | 70.05±0.56 | **72.37±0.83** | 20.57±3.59 | 18.11±1.82 | 18.41±0.33 |
| Pubmed |*clean*| **90.04±0.24** | 89.02±0.20 | 88.66±0.17 | 87.44±0.19 | 88.24±0.23 |
| | TDGIA+HAO | 86.19±1.45 | **88.89±0.17** | 42.45±1.20 | 44.35±3.11 | 43.56±1.03 |
| | MetaGIA+HAO | 85.89±0.87 | **88.87±0.79** | 57.49±0.99 | 39.57±0.55 | 39.58±0.05 |
**Table R3.** Node classification accuracy (%) on graph **injection, evasion, non-targeted, black-box** attack in **inductive** learning.
Pdf: /pdf/c1e8c56754cf57876e1749a56727533ab42e8460.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
QuIP: 2-Bit Quantization of Large Language Models With Guarantees | Accept (spotlight) | Summary: This work proposes a unified framework for weight quantization with error feedback together with preprocessing and postprocessing transformation that makes the model more quantization-friendly. Authors derive theoretical bounds on the quantization error and investigate the failure cases of OPTQ quantization. The introduced LDLQ method with incoherence processing is evaluated on quantization of LLM for 2,3 and 4 bit quantization.
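The preprocessing transformation summarized above can be illustrated with a minimal sketch (illustrative only; random QR-based orthogonal matrices stand in for whatever structured transforms an efficient implementation would use): rotating a weight matrix spreads outlier entries across many coordinates while preserving its Frobenius norm, shrinking the dynamic range a uniform quantization grid must cover.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Toy weight matrix with one large outlier entry.
W = rng.standard_normal((n, n))
W[0, 0] = 100.0

# Random orthogonal matrices via QR decompositions.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

W_rot = U @ W @ V.T

# Orthogonal rotations preserve the Frobenius norm exactly ...
assert np.isclose(np.linalg.norm(W), np.linalg.norm(W_rot))
# ... but spread the outlier out, reducing the maximum magnitude
# that a uniform quantizer has to represent.
assert np.abs(W_rot).max() < np.abs(W).max()
```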
Strengths: * Paper proposes a unified view on quantization with the objective of minimization of layer-wise MSE loss that involves well-known OPTQ as a particular case.
* Authors derive explicit average and worst-case bounds on quantization error.
* The introduced incoherence processing is well-motivated and supported by quantitative analysis.
* The demonstrated LDLQ failure case example despite being very different from typical cases occurring in practice is still interesting and shows potential limitations of the optimal quantization methods.
* Different variants of LDLQ achieve strong performance on quantization of models from OPT family at various bit widths considered. The most impressive result is that the model attains reasonable perplexity and zero-shot accuracy for 2 bit quantization, which is known to be very challenging and all the competitive methods experience massive performance drop.
* Overall, the work is well-structured and accompanied with thorough theoretical analysis and empirical study.
Weaknesses: * The method is evaluated only on a single LLM family. To be sure that the method achieves strong performance on LLM quantization in general, one should consider at least one more family of LLMs.
* Minor. $U^{\prime}$ in formula (4) is not upper unit triangular but rather strictly upper triangular (I guess $U^{\prime} + I$ is supposed to be upper unit triangular).
* Minor. "This will result in each of its eigenvalues being a random unit vector." I guess the authors meant eigenvectors instead of eigenvalues.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How well does LDLQ+QuIP perform on other families of LLM? Would be interesting to evaluate QuIP on LLaMA given the popularity and impressive performance of this model family. Since it is not fully open-sourced one could consider the recent Falcon family (Falcon-7B and Falcon-40B) instead.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Evaluating on more models.**
Thanks for the feedback; we agree it is important to evaluate our quantization method on additional models. At the suggestion of the reviewers, we conducted additional experiments on LLaMa-2-70b, and share preliminary results below. We will include a fully comprehensive set of LLaMa results in the revision. Encouragingly, the message of our paper remains the same. QuIP is able to successfully quantize down to 2 bits, as evaluated on Lambada and PiQA zeroshot tasks.
| | Lambada Accuracy | PiQA Accuracy |
| -- | -- | -- |
| LLaMa-2-70b-chat 2 Bit | 74.02 | 77.48 |
**Minor linear algebra clarification on Eq. (4)**
Thanks for catching the typo, yes $U'$ should be strictly upper triangular.
**Line 170 eigenvalue vs eigenvector clarification.**
Thanks for catching the typo, yes we meant to say eigenvectors.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: After reading the responses, I decided to keep the original score.
LLaMA-2-chat is a good choice for the method validation, but the results provided in the response involve only 2-bit compression for a single model on two tasks from `lm-eval-harness`. Individual tasks are known to be quite noisy and limited; therefore, in order to make any definite conclusions about the LLM performance it would be desirable to consider a larger number of tasks (at least 5-6). In addition, I would recommend comparing the performance against the fp16 baseline, since these metrics, without having a reference point (one can indeed check the numbers in the LLaMA-2 paper), are not illustrative. Anyway, given that 2 bits is a challenging target, a significant drop in performance is expected and having reasonable performance at this point is still a good result.
---
Reply to Comment 1.1.1:
Title: Updated results on LLaMa-2
Comment: We conducted a suite of further experiments quantizing Llama-2: 7b and 13b parameter models, both pretrained and chat finetuned. Overall, we’ve verified that our method QuIP works well on this additional model, and can provide a step function improvement in quantization at 2 bits compared to OPTQ.
We adapted code from the GPTQ-for-LLaMa repo, which we use to evaluate WikiText and C4 perplexity. We use the lm-evaluation-harness to evaluate downstream zeroshot accuracy on BoolQ, PiQA, WinoGrande, ARC easy, and ARC challenge.
| Llama-2 Model | Quant Method | Wbits | Wiki | C4 | BOOLQ | PIQA | WinoGrande | ARC-e | ARC-c |
| ------------- | ------------ | ----- | -------- | -------- | ------ | ------ | ---------- | ------ | ------ |
| 13b | Full Precision | 16 | 4.884 | 6.727 | 80.52% | 80.52% | 72.22% | 77.48% | 49.15% |
| | QuIP | 4 | 5.011 | 6.887 | 80.90% | 80.10% | 72.60% | 77.30% | 49.20% |
| | | 3 | 5.340 | 7.343 | 77.40% | 79.10% | 71.10% | 76.30% | 49.00% |
| | | 2 | 10.094 | 13.131 | 63.90% | 69.60% | 57.90% | 55.80% | 31.50% |
| | OPTQ | 4 | 5.203 | 7.060 | 78.80% | 80.40% | 70.00% | 76.10% | 48.80% |
| | | 3 | 6.666 | 8.910 | 74.50% | 76.70% | 69.30% | 69.50% | 42.49% |
| | | 2 | 3086.167 | 406.934 | 40.20% | 48.80% | 48.70% | 27.00% | 27.90% |
| 13b-chat | Full Precision | 16 | 6.108 | 8.489 | 81.71% | 79.11% | 71.27% | 73.74% | 50.26% |
| | QuIP | 4 | 6.275 | 8.733 | 80.80% | 78.40% | 71.20% | 72.50% | 48.60% |
| | | 3 | 6.713 | 9.367 | 78.60% | 77.80% | 70.50% | 71.30% | 47.10% |
| | | 2 | 16.046 | 20.034 | 58.30% | 67.50% | 56.90% | 53.00% | 33.10% |
| | OPTQ | 4 | 6.454 | 8.962 | 79.60% | 78.00% | 70.50% | 72.80% | 49.40% |
| | | 3 | 8.393 | 11.399 | 70.80% | 74.40% | 64.70% | 66.70% | 42.20% |
| | | 2 | 3136.833 | 1138.701 | 41.90% | 47.90% | 47.50% | 25.50% | 29.60% |
| Llama-2 Model | Quant Method | Wbits | Wiki | C4 | BOOLQ | PIQA | WinoGrande | ARC-e | ARC-c |
| ------------- | ------------ | ----- | ------ | -------- | ------ | ------ | ---------- | ------ | ------ |
| 7b | Full Precision | 16 | 5.472 | 7.263 | 77.77% | 79.11% | 69.06% | 74.54% | 46.25% |
| | QuIP | 4 | 5.940 | 8.010 | 75.87% | 77.26% | 67.88% | 71.00% | 42.58% |
| | | 3 | 6.499 | 8.738 | 74.92% | 76.28% | 67.01% | 69.15% | 41.47% |
| | | 2 | 27.125 | 31.333 | 52.72% | 60.66% | 51.70% | 39.14% | 26.19% |
| | OPTQ | 4 | 6.067 | 7.845 | 76.09% | 78.51% | 67.96% | 72.52% | 44.03% |
| | | 3 | 9.505 | 11.956 | 68.84% | 74.32% | 63.38% | 62.16% | 37.37% |
| | | 2 | NaN | 1794.547 | 41.31% | 48.31% | 48.46% | 26.09% | 27.56% |
| 7b-chat | Full Precision | 16 | 7.077 | 9.528 | 80.67% | 76.66% | 66.22% | 69.65% | 44.37% |
| | QuIP | 4 | 7.431 | 10.147 | 80.24% | 76.71% | 66.46% | 67.26% | 42.49% |
| | | 3 | 8.090 | 11.052 | 72.48% | 76.33% | 65.35% | 67.76% | 40.19% |
| | | 2 | 66.586 | 61.662 | 50.18% | 57.67% | 50.43% | 35.27% | 27.90% |
| | OPTQ | 4 | 7.791 | 10.965 | 79.02% | 75.68% | 66.54% | 69.36% | 41.13% |
| | | 3 | 11.847 | 17.736 | 64.25% | 70.67% | 62.19% | 53.62% | 33.62% |
| | | 2 | NaN | NaN | 45.54% | 50.00% | 49.88% | 27.57% | 29.10% | | Summary: The work presented in this paper introduces quantization with incoherence processing, which enables better quantization with fewer bits per parameter. The authors provide a theoretical analysis for adaptive rounding methods and present experimental results demonstrating performance of 2-bit quantization..To do this, the paper introduces the LDLQ adaptive rounding method and demonstrates its optimality compared to other rounding methods which specify linear feedback U for hessians, and rounding to integers (see Theorem 1). The authors define incoherence and demonstrate its effectiveness in achieving a theoretically superior asymptotic bound for LDLQ in terms of the spectrum of H. Additionally, the authors propose efficient pre and post incoherence processing techniques to transform W and H matrices, eliminating the need for nxn matrix multiplications.
Strengths: The paper encompasses numerous interesting new concepts, theorems, and their corresponding proofs. The extensive experimentation serves to prove the authors' claims effectively. The writing is solid, making the paper relatively accessible despite the number of theorems and proofs. Proposed incoherence seems to improve baseline methods, for example OPTQ (table 6,7 appendix). The utilization of the incoherence technique has the potential to contribute significantly to achieve usable 2-bit quantization. They provided code, which is always a plus.
Weaknesses: 1) One significant drawback of the paper is that despite the aforementioned enhancements, the performance of the 30b model with 2-bit quantization, although much better than 2-bit OPTQ, still falls short of the 13b model with 4-bit OPTQ quantization (and even 4-bit RTN), making it practically unusable. This limitation diminishes the practical usability of the work on 2-bit quantization.
2) The concept of incoherence remains unclear until page 4 of the paper, which is problematic considering it is one of the main focal points. I believe the introduction should include a clear definition and intuitive explanation of what incoherence entails. Furthermore, I found it challenging to establish a connection between the intuition provided in line 22 and the definition presented in line 134.
3) It is possible I missed it, but it appears that there is no mention or measurement of inference speed presented in the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) I may have overlooked some details, but it appears that the paper only shows how making H incoherent reduces the loss upper bounds, and says nothing about W incoherence. What is the benefit of also making W incoherent?
2) In line 169 you write that "to make symmetric matrix incoherent is to conjugate it by uniform random orthogonal matrix". Can you please add a citation or an explanation of why this is true?
3) Again, I may have overlooked some details: in 172-176 you said that the procedure described there makes the matrices H and W incoherent with high probability. Why is that, and where is the proof?
Suggestions:
1. In line 64, it seems that "OBC" might be a typographical error, and "OBQ" could be the correct term.
2. It would be helpful to include a citation for LDL decomposition.
3. In line 99, it would enhance clarity to present an equation that demonstrates the result obtained when equation (4) is applied in equation (3).
4. In line 217, it appears that "OTPQ" might be a typographical error, and you mean "OPTQ".
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **What’s the best use of bits.**
We conducted a thorough analysis of the experimental data submitted in our paper. The reviewer is accurate in their observation that 2 bit quantization with QuIP is not worthwhile in terms of total memory budget. However, QuIP is the first method we are aware of to achieve decent and usable results at 2 bits quantization. Other methods including OPTQ cannot sensibly quantize at 2 bits.
Our results indicate that QuIP at 3 bit quantization may be the best for a fixed bit budget, assuming our results generalize to other models. Please see the attached figure where we plot total memory vs perplexity/accuracy on the full set of language generation and zeroshot tasks we include in our paper submission. Over all 7 settings and all OPT models up to 30b, QuIP at 3 bits (and 4 bits) quantization achieves a better use of memory than OPTQ quantized to 4 bits, and the full precision models.
Previous quantization methods did not work at 2 bits, but our method gives decent and usable results at this quantization level. Our work indicates that quantizing to 3 bits with QuIP is the best use of a fixed bit budget, and paves the way towards making 2 bit inference viable.
**Improving incoherence explanations.**
Great point, we’ll improve our explanations based on your feedback.
In Definition 1, incoherent Hessian matrices are defined as having eigenvectors with sufficiently small entries. Incoherent weight matrices are defined as having sufficiently small entries. The intuitive explanation in Line 22 states "the weights themselves and the directions in which it is important to have good rounding accuracy are not too large in any one coordinate". We see that the definition of incoherent weight matrices ensures the weights are not too large. Note the Hessian is in the proxy objective $\mathrm{tr}( (\hat{W} - W) H (\hat{W} - W)^T )$, and the eigenvectors of H indicate the directions in which rounding accuracy is important. Nagel et al. [1] also give a nice explanation of the derivation of H.
[1] Markus Nagel, Rana Ali Amjad, Mart Van Baalen, Christos Louizos, and Tijmen Blankevoort. Up or down? adaptive rounding for post-training quantization. In International Conference on Machine Learning, pages 7197–7206. PMLR, 2020.
**Evaluating inference speed.**
This is a valid point. We have developed a non-optimized implementation, including a 2-bit CUDA kernel, and obtained initial inference timing results. Based on these results, the additional overhead of the incoherent matrix multiplies does not translate to a significant slowdown. We conducted sequence generation of length 512 on LLaMa-2-70b quantized to 2 bits, with and without our incoherence processing. Our observations so far indicate that loading the weights onto the GPU dominates the inference time, which is not changed by our method. Further optimizations such as tiling the incoherent matrix multiplies are expected to speed up inference. Section 4.1 also demonstrates why the incoherence processing operations are asymptotically non-dominant.
| | Inference time (s) |
| -- | ------------------------ |
| With incoherent matrix multiplies | 416.575 |
| Without incoherent matrix multiplies | 377.166 |
Experiments conducted on an NVIDIA RTX A6000 GPU.
**What does our theory say about the benefit of making W incoherent?**
Theorem 7 shows that the method in Algorithm 4 is theoretically optimal when rounding to a finite grid. Theorem 7 requires that the weights are incoherent, and ensures this by applying Lemma 5 to make the weights incoherent with high probability. The theorem statement could also be rewritten to require weight incoherence as an assumption.
Beyond our theoretical analysis, weight incoherence provides several empirical benefits. Weight incoherence ensures that every weight entry is essentially on the same scale. Therefore, the need for different scale factors per group is reduced.
**Line 169 question on incoherence.**
Conjugating a matrix $H$ by a matrix $U$ produces the matrix $U H U^{-1}$, which in the case of orthogonal $U$ is equivalent to $U H U^T$. Let $Q D Q^T$ be the eigendecomposition of $H$. It follows that $(UQ) D (UQ)^T$ is an eigendecomposition of $U H U^T$. If $U$ is drawn uniformly at random from the set of orthogonal matrices, then because $Q$ is also an orthogonal matrix, $U Q$ is also a random variable distributed uniformly on the set of orthogonal matrices. It follows that each column of $U Q$ (i.e., each eigenvector of $U H U^T$) is a random variable distributed uniformly on the unit sphere. The entries of such a random vector in $n$ dimensions concentrate at magnitude $n^{-1/2}$, and so must the entries of $U Q$. Therefore, $U H U^T$ will (with high probability) be incoherent with a small factor $\mu$, and we can get bounds on this factor $\mu$ using concentration inequalities on the magnitude of the entries of a random point on the unit sphere.
We will make sure to improve the explanation in the paper.
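This concentration argument can also be checked numerically. Below is a minimal sketch (illustrative only, not code from the paper; `coherence` is a hypothetical helper that scales the largest eigenvector entry by $\sqrt{n}$), showing that conjugating an axis-aligned symmetric matrix by a Haar-random orthogonal matrix drives this measure down toward a small constant:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# A symmetric matrix with a maximally coherent (axis-aligned) eigenbasis.
H = np.diag(rng.uniform(0.1, 10.0, size=n))

# Draw U approximately uniformly from the orthogonal group via QR of a
# Gaussian matrix, fixing the signs of R's diagonal to get Haar measure.
Q, R = np.linalg.qr(rng.standard_normal((n, n)))
U = Q * np.sign(np.diag(R))

H_conj = U @ H @ U.T  # conjugation by the orthogonal matrix U

def coherence(M):
    # sqrt(n) times the largest-magnitude entry of M's eigenvector matrix;
    # 1 is the smallest possible value, sqrt(n) the largest.
    _, vecs = np.linalg.eigh(M)
    return np.sqrt(M.shape[0]) * np.abs(vecs).max()

print(coherence(H))       # near sqrt(n): axis-aligned eigenvectors
print(coherence(H_conj))  # small (grows only logarithmically with n)
```

The second value stays small because the eigenvector entries of the conjugated matrix concentrate at magnitude $n^{-1/2}$, as argued above.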
**Lines 172-176 incoherence with high probability.**
Please see section 4.1 and Lemma 5 for explanation on how we can ensure the Hessian and weight matrices are incoherent with high probability.
**OBC vs OBQ.**
Sections 3 and 4 of the OPTQ paper reference the “OBQ” method that is proposed within the “OBC” paper.
**LDL citation.**
Thanks, we will include a citation for the LDL decomposition.
**Line 99 better clarity when showing Eq.(4) applied to Eq.(3).**
This is a good point, we will include the additional explanation.
**Line 217 typo.**
Thanks for pointing this typo out, we will fix it.
---
Rebuttal Comment 1.1:
Comment: Sorry for the delay in the response. Thank you for taking the time to address my questions. I have read the feedback from other reviewers and the authors' rebuttal.
Regarding Weakness 1: Thank you for pointing out the results for 3-bit quantization. I understand the fact that 2-bit quantization is challenging, and making it work somewhat decently is an achievement by itself. However, from a practical standpoint, despite the fact that QuIP outperforms competitive methods at 2-bit, it is still impractical since for a given memory budget it is better to take a smaller model with a higher bit-width. Taking into account the paper's focus on 2-bit quantization, in my opinion this is still a major drawback. I think one way to address this is to make the paper more transparent, for example by including tables with a fixed memory budget to compare different options.
Inference speed: Thank you for measuring the inference speed. I read your discussion with Av5n. I would recommend including more precise measurements and a fair comparison with OPTQ in the paper.
Apart from these points, I am satisfied with the authors' response. After revisiting the paper, reading the other reviews, and the authors' responses, I would like to increase my score from 4 to 5.
- The paper further introduces incoherence preprocessing, together with a Kronecker-factor based inference scheme, which leads to significant accuracy improvements for very low bitwidth compression.
Strengths: - The paper introduces a non-obvious and useful optimization to GPTQ which fully eliminates the matrix inversion while obtaining equivalent results.
- The authors present what seem like the first theoretical guarantees for an adaptive layerwise rounding algorithm.
- The proposed incoherence preprocessing leads to greatly improved results for 2-bit compression. While the idea of multiplying with an orthogonal matrix to produce more uniform and thus easier to compress data is not new, I have not seen it applied in the context of LLM quantization before.
- The paper also studies the impact of some additional heuristics, like greedy post-processing passes on top of GPTQ. While those are mostly small tricks, it is good to have those implemented and evaluated.
- Code is provided for reproducibility, including a script to verify the equivalence of their GPTQ optimization.
Weaknesses: - The method is only evaluated on OPT, which is not considered a great LLM anymore by today's standards; good results on e.g. LLaMa would be significantly stronger.
- There do not seem to be any runtime numbers for inference via the proposed efficient Kronecker-factored scheme. I think that the tripling of required FLOPs (2 additional o(n^2) matmuls if I understood this part correctly) may be challenging to implement in practice without significant overheads (both for low- and large-batch inference), which would limit the practicality of incoherence preprocessing.
- The algorithm family seems to be somewhat designed around how GPTQ works and the corresponding optimality is thus not exactly super surprising.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Given that the largest models are usually more robust to quantization, I am wondering how QuIP performs on the 66B and 175B variants; is there any reason why those results are missing?
- How does QuIP perform with groups, which are generally very effective for standard GPTQ / RTN?
- 221: GPTQ only requires one inverse + one Cholesky decomposition
- 282: It would be good to reference that the official GPTQ repository also proposed a similar trick in the context of LLaMa models several months ago.
- The incoherence post-processing code currently returns a matrix that is not quantized anymore, which is a bit confusing. Related to that, I would also suggest providing pseudocode for the efficient Kronecker-factor inference scheme.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper briefly discusses limitations and broader impact in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Evaluating on more models.**
Thanks for the feedback; we agree it is important to evaluate our quantization method on additional models. At the suggestion of the reviewers, we conducted additional experiments on LLaMa-2-70b, and share preliminary results below. We will include a fully comprehensive set of LLaMa results in the revision. Encouragingly, the message of our paper remains the same. QuIP is able to successfully quantize down to 2 bits, as evaluated on Lambada and PiQA zeroshot tasks.
| | Lambada Accuracy | PiQA Accuracy |
| -- | -- | -- |
| LLaMa-2-70b-chat 2 Bit | 74.02 | 77.48 |
**Evaluating inference speed.**
This is a valid point. We have developed a non-optimized implementation, including a 2-bit CUDA kernel, and obtained initial inference timing results. Based on these results, the additional overhead of the incoherent matrix multiplies does not translate to a significant slowdown. We conducted sequence generation of length 512 on LLaMa-2-70b quantized to 2 bits, with and without our incoherence processing. Our observations so far indicate that loading the weights onto the GPU dominates the inference time, which is not changed by our method. Further optimizations such as tiling the incoherent matrix multiplies are expected to speed up inference. Section 4.1 also demonstrates why the incoherence processing operations are asymptotically non-dominant.
| | Inference time (s) |
| -- | ------------------------ |
| With incoherent matrix multiplies | 416.575 |
| Without incoherent matrix multiplies | 377.166 |
Experiments conducted on an NVIDIA RTX A6000 GPU.
**The algorithm family is somewhat designed around GPTQ, and therefore its optimality is not super surprising.**
We’d like to respectfully push back on this statement. While we believe the LDLQ algorithm formulation and its subsequent in-class optimality proof are intuitive to understand, the equivalence of LDLQ and GPTQ is not intuitive. The GPTQ paper clearly demonstrated its empirical success, but deriving a theoretical proof of optimality based on its method formulation was not clear. Only by showing equivalence to LDLQ could we leverage LDLQ’s optimality to show GPTQ’s in-class optimality, but from looking at their respective formulations this connection is not clear.
**How does QuIP perform on larger OPT models 66B/175B? Why are these missing?**
Due to time and compute constraints, we were unable to submit experiments on larger OPT models. We include some results of our method QuIP on OPT-66b evaluated on language generation tasks, and will include a full set of results on 66B and 175B in the revised paper. Our story on these larger OPT models remains the same: with our incoherence processing we are able to achieve good quantization down to 2 bits.
QuIP on OPT-66b:
| W Bits | Wiki | PTB | C4 |
| -------- | ------ | ------ | ---- |
| 16 | 9.34 | 13.36 | 10.99 |
| 4 | 9.35 | 13.39 | 11.03 |
| 3 | 9.45 | 13.55 | 11.17 |
| 2 | 10.64 | 15.68 | 12.67 |
**How does QuIP perform with groups?**
We tried groupsize 32 and 128 on the OPT-350m model. At 2 bits, we see there is a modest improvement from using grouping. There are interesting research questions here regarding the effect of groupsize with our method. Theoretically, we would not expect groupsize to help too much since incoherence processing should be making the weights relatively uniform. We do however see some benefit at 2 bits for OPT-350m, and further exploration is warranted on additional models.
QuIP on OPT-350m with various groupsizes:
| quant method | | wiki | ptb | c4 |
| -------- | --- | ------ | ------ | ------ |
| full | W16 | 22.00 | 31.07 | 22.59 |
| quip | W4 | 22.50 | 32.57 | 23.23 |
| | W3 | 25.19 | 35.65 | 25.48 |
| | W2 | 672.29 | 744.18 | 320.04 |
| quip-groupsize 32 | W4 | 23.05 | 32.57 | 23.39 |
| | W3 | 25.70 | 36.98 | 25.91 |
| | W2 | 197.02 | 244.10 | 133.76 |
| quip-groupsize 128 | W4 | 23.03 | 32.77 | 23.39 |
| | W3 | 25.59 | 36.82 | 25.81 |
| | W2 | 192.59 | 220.44 | 128.27 |
**Clarify number of matrix inverse / Cholesky in GPTQ.**
We realize Algorithm 1 in GPTQ’s paper states only one inverse + one Cholesky decomposition. However, in [GPTQ’s code](https://github.com/IST-DASLab/gptq/blob/2d65066eeb06a5c9ff5184d8cebdf33662c67faf/gptq.py#L101) we can see two calls to `torch.linalg.cholesky(H)`, and one call to `torch.cholesky_inverse(H)`.
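For readers unfamiliar with the identity behind those two calls: factoring $H = C C^T$ once and then forming $H^{-1} = C^{-T} C^{-1}$ is what the `cholesky` + `cholesky_inverse` pair computes for a symmetric positive definite matrix. A small numpy sketch of the same computation (illustrative only, not the repository's code):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)   # symmetric positive definite, like a proxy Hessian

C = np.linalg.cholesky(H)     # H = C @ C.T, C lower triangular
C_inv = np.linalg.inv(C)      # in practice done via a cheap triangular solve
H_inv = C_inv.T @ C_inv       # H^{-1} = C^{-T} C^{-1}, symmetric by construction
```

This route is faster and more numerically stable than a general-purpose inverse because it exploits symmetry and positive definiteness.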
**Acknowledge reordering also used in GPTQ.**
Thanks for pointing that out, we will make sure to reference the GPTQ repo.
**Clarify incoherence post-processing when converting back to float. Suggest pseudocode for Kronecker-factor inference.**
To perform a linear layer forward pass in our current implementation, we (given input activations):
(1) perform a multiplication by each of the input-side Kronecker factor matrices (2 small matrix-matrix multiplies)
(2) multiply the result by the quantized weights using a specialized kernel that does quantized-matrix-vector multiplies without having to decompress (as in GPTQ)
(3) multiply the result by each of the output-side Kronecker factor matrices.
This gives a speedup relative to a baseline of offloading the original 16-bit weight matrix to the CPU and loading it to the GPU on-demand, because loading the weights onto the GPU is very expensive due to limited host-device bandwidth. The additional Kronecker factor matrix multiplies slow us down a bit relative to GPTQ, however the slowdown is small since the work of the extra matrix multiplies is asymptotically small.
We will make sure to include pseudocode and an improved explanation in the paper revision.
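The three-step forward pass described above can be sketched in a few lines. The following is a toy numpy illustration (the factor sizes are hypothetical, and the processed weight matrix is kept in floating point rather than actually quantized) of how two small Kronecker-factor multiplies replace one $n \times n$ multiply on each side:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_orth(n):
    # Random orthogonal matrix via QR of a Gaussian matrix.
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

p, q = 8, 16                          # hypothetical factor sizes; n = p * q
n = p * q
U1, U2 = rand_orth(p), rand_orth(q)   # output-side Kronecker factors
V1, V2 = rand_orth(p), rand_orth(q)   # input-side Kronecker factors

W = rng.standard_normal((n, n))       # original weights
U, V = np.kron(U1, U2), np.kron(V1, V2)
W_proc = U @ W @ V.T                  # incoherence-processed weights (these are
                                      # the weights that get quantized in QuIP)

def kron_mv(A, B, x):
    # Computes (A kron B) @ x with two small matmuls instead of one n x n matmul.
    X = x.reshape(A.shape[1], B.shape[1])
    return (A @ X @ B.T).reshape(-1)

x = rng.standard_normal(n)            # input activations
# Steps (1)-(3): input-side factors, (quantized) weight matmul, output-side factors.
y = kron_mv(U1.T, U2.T, W_proc @ kron_mv(V1, V2, x))
# y recovers W @ x because U and V are orthogonal.
```

Each `kron_mv` call costs $O(n(p+q))$ rather than the $O(n^2)$ of a dense multiply, which is why the extra work is asymptotically non-dominant.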
---
Rebuttal Comment 1.1:
Title: Post-Rebuttal Comments
Comment: Thank you for the detailed reply!
I am not entirely convinced by your new experimental results, in particular the kernel numbers. 377s for 512 tokens implies ~736ms per token, GPTQ reports 589ms on the same GPU for OPT-175B, a 2x larger model, at FP16. An ideal implementation of a 2-bit kernel for the 2x smaller LLaMa-70B should be close to 16x faster than this (as this is the difference in memory that needs to be loaded, which dominates inference for generation). Hence, your low overheads currently seem to be measured relative to a rather uncompetitive baseline.
As for the two Cholesky decompositions, I believe `torch.linalg.cholesky(H)` and `torch.cholesky_inverse(H)` combine to a symmetric matrix inverse that should be faster and more stable than a general inversion.
Nevertheless, I will maintain my score based on the merits of the work discussed in my initial review.
---
Reply to Comment 1.1.1:
Title: Clarification on timing experiments
Comment: Thanks for your response. We were not precise enough in our original rebuttal; our timing results were for generating a batch of 8 sequences of length 512 each. Therefore our baseline model had an inference throughput of 92ms/token, not 736ms/token. While this is faster than GPTQ's 589ms/token for OPT-175B at fp16 precision, we've identified several factors that make a direct comparison difficult.
(1) Our code measures the whole huggingface token generation loop including encoding and decoding, while GPTQ's code just measures the model evaluation. (2) We used 1 GPU, while GPTQ's timing results use 8 GPUs. (3) We used a 2-bit compressed version of Llama-2-70b-chat, while they are using OPT-175B. (4) We used a batch size of 8 while the GPTQ's results used a batch size of 1. | Summary: This paper introduces QuIP, an algorithm for weight quantization in large language models, with theoretical guarantees. The experimental results demonstrate that QuIP achieves nearly lossless performance when using 3-bit quantization for models larger than 3B, and it shows good performance with 2-bit quantization for models larger than 7B.
Strengths: First of all, QuIP is the first post-training quantization algorithm that compresses weights to 2 bits with reasonable loss of performance, significantly pushing the boundary of LLM on-device deployment and making larger LLMs available to ordinary users.
Second, the paper proposes LDLQ rounding method, which is both worst-case and average-case optimal in a family of adaptive rounding methods, i.e. iterative quantization through each column of weights. The family is carefully selected so that OPTQ (previously known as GPTQ) falls within this family.
Third, a solid proof is shown that LDLQ is optimal in its family assuming rounding to integers, while a rigorous study, with counterexamples and more careful analysis, is conducted for the more realistic cases in Section 5.2.
Weaknesses: The experiments are mostly conducted on OPT, which might not be the most standard model at the time of reviewing. Models like Llama, Falcon and MPT might be more popular LLMs. However, the reviewer understands the paper was submitted months ago and there might have been limited time to conduct all the experiments. Also, the main experiments are conducted on decoder-only auto-regressive language models, while there are other models/domains of interest, for example FastChat-T5, which is an encoder-decoder architecture, Vision transformers, etc.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As mentioned in Section 5.1., OPTQ is equivalent to LDLQ in the class of adaptive rounding methods with linear feedback (proof in Supplement E), and thus is a special case of LDLQ in general. Is there any further case study why OPTQ performs worse in general?
Is there any analysis on quantization outliers under QuIP/LDLQ and how would QuIP handle those cases?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: We believe that considering activation quantization could lead to further improvements in the future. However, the implementation of QuIP as a weight quantization method has already overcome major obstacles and enabled the deployment of LLMs.
Currently, there is a lack of realistic experiments to assess the performance, in terms of compute speed. Nevertheless, implementing and optimizing 2/3-bit performance on CUDA would likely be a highly challenging task, and it could be worthwhile to explore this further in a separate research paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Evaluating on more models.**
Thanks for the feedback; we agree it is important to evaluate our quantization method on additional models. At the suggestion of the reviewers, we conducted additional experiments on LLaMa-2-70b, and share preliminary results below. We will include a fully comprehensive set of LLaMa results in the revision. Encouragingly, the message of our paper remains the same. QuIP is able to successfully quantize down to 2 bits, as evaluated on Lambada and PiQA zeroshot tasks.
| | Lambada Accuracy | PiQA Accuracy |
| -- | -- | -- |
| LLaMa-2-70b-chat 2 Bit | 74.02 | 77.48 |
**Clarification on LDLQ vs OPTQ.**
To clarify, we show that OPTQ and LDLQ are in fact equivalent methods. We develop a general class of quantization methods (adaptive rounding with linear feedback), which encompasses LDLQ, OPTQ, as well as nearest and stochastic rounding. Within this general class of methods, we show that LDLQ/OPTQ is worst and average-case optimal (Theorem 1).
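To make the rounding class concrete, here is a minimal, hedged sketch of "adaptive rounding with linear feedback" in NumPy. The function name and the nearest-integer quantizer are illustrative; LDLQ/OPTQ additionally derive the feedback matrix `U` from the LDL^T factorization of the proxy Hessian, which is not shown here.

```python
import numpy as np

def quantize_with_linear_feedback(w, U):
    """Adaptive rounding with linear feedback (illustrative sketch).

    Rounds the coordinates of `w` one at a time; before rounding
    coordinate k, it is corrected by a linear function (column k of the
    strictly-upper-triangular matrix U) of the residuals of the
    already-rounded coordinates.  U = 0 recovers plain nearest rounding;
    LDLQ/OPTQ instead choose U from the LDL^T factorization of the
    proxy Hessian H.
    """
    n = len(w)
    w_hat = np.zeros(n)
    for k in range(n):
        # feedback term: residuals of previously rounded coordinates
        corrected = w[k] + (w[:k] - w_hat[:k]) @ U[:k, k]
        w_hat[k] = np.round(corrected)  # nearest-integer quantizer Q
    return w_hat

w = np.array([0.4, 1.3, -0.7])
# zero feedback reduces to nearest rounding
assert np.allclose(quantize_with_linear_feedback(w, np.zeros((3, 3))),
                   np.round(w))
```

With a nonzero feedback entry, the residual of an earlier coordinate shifts a later rounding decision, which is the mechanism the worst/average-case analysis of Theorem 1 quantifies.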
One of the main insights of our paper is the benefit of what we call “incoherence processing”. This development greatly improves the performance of all rounding methods at 2 and 3 bits per weight, even nearest rounding.
**Outlier analysis.**
Intuitively, our incoherence processing reduces the prevalence of outliers. When the weight and proxy Hessian matrices are incoherent, neither the weights themselves nor the directions in which rounding accuracy matters most are too large in any single coordinate. The empirical success of our method indicates the benefit of this outlier reduction.
Theoretically, Definition 1 gives a definition for incoherent weight matrices such that each entry is not too large. Under this incoherence assumption, we show in Lemma 3 that LDLQ can be superior to nearest and/or stochastic rounding, depending on worst or average case settings. Section 5.1 and Lemma 5 show how we can achieve incoherence in practice with high probability, using Kronecker products of random orthogonal matrices.
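As an illustration of why random rotations reduce coherence, the toy sketch below multiplies a maximally coherent weight matrix by dense random orthogonal matrices. This is only illustrative: the paper uses Kronecker products of smaller random orthogonal matrices for efficiency, and the construction here is not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n, rng):
    """Random orthogonal matrix via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))  # fix column signs

# A maximally "coherent" weight matrix: all mass in one entry.
n = 64
W = np.zeros((n, n))
W[0, 0] = 1.0

U, V = random_orthogonal(n, rng), random_orthogonal(n, rng)
W_tilde = U @ W @ V.T  # incoherence-processed weights

# The largest entry shrinks, while the Frobenius norm (and hence the
# overall quantization "budget") is preserved exactly.
assert np.abs(W_tilde).max() < np.abs(W).max()
assert np.isclose(np.linalg.norm(W_tilde), np.linalg.norm(W))
```

Because orthogonal conjugation preserves the Frobenius norm but spreads mass across coordinates, no single weight (or important rounding direction) remains an outlier, matching the intuition above.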
**Considering activation quantization.**
We agree that investigating activation quantization is an exciting direction, albeit out of scope for this paper.
**Evaluating inference speed.**
This is a valid point. We have developed a non-optimized implementation, including a 2-bit CUDA kernel, and performed initial inference timing. Based on these results, the additional overhead of the incoherent matrix multiplies does not translate to a significant slowdown. We conducted sequence generation of length 512 on LLaMa-2-70b quantized to 2 bits, with and without our incoherence processing. Our observations so far indicate that loading the weights onto the GPU dominates the inference time, which is not changed by our method. Further optimizations, such as tiling the incoherent matrix multiplies, are expected to speed up inference. Section 4.1 also demonstrates why the incoherence processing operations are asymptotically non-dominant.
| | Inference time (s) |
| -- | ------------------------ |
| With incoherent matrix multiplies | 416.575 |
| Without incoherent matrix multiplies | 377.166 |
Experiments conducted on an NVIDIA RTX A6000 GPU. | Rebuttal 1:
Rebuttal: We thank the reviewers for their helpful feedback.
In summary, reviewers noted how the proposed method pushed the boundary of LLM quantization down to 2 bits, provided a novel theoretical understanding of adaptive layerwise rounding algorithms, and conducted extensive experiments. Previous quantization methods did not work at 2 bits, but our method gives decent and usable results at this quantization level. Our work indicates that quantizing to 3 bits with QuIP is the best use of a fixed bit budget, and paves the way towards making 2 bit inference viable.
The primary concerns were regarding:
**(1) Evaluation on additional models:** QuIP achieves good 2 bit quantization on LLaMa-2. We conducted preliminary experiments quantizing another model, LLaMa-2-70b, to 2 bits, and see that it achieves good zeroshot accuracy on Lambada and PiQA tasks.
**(2) Evaluation of inference speed:** Based on a non-optimized implementation, the additional operations from our incoherence processing translate to only a 10% increase in inference time. Additional optimizations are expected to speed up inference time. Section 4.1 also demonstrates why the incoherence processing operations are asymptotically non-dominant.
**(3) Evaluating the optimal bit quantization level for fixed bit budget:** An analysis of the experimental data from our submission shows that quantization with QuIP at 3 bits achieves the best tradeoff curve for total memory and model performance. QuIP at 3 bits makes use of a fixed memory budget better than OPTQ, and the full precision OPT models. This insight is consistent across all 7 of our language generation and zeroshot tasks, and on all OPT models up to 30b.
Full details can be found in the specific reviewer responses. We will include a comprehensive set of experiments and analysis in the paper revision.
**Figure:** QuIP at 3 bits (and 4 bits) achieves a better use of a fixed bits budget compared to OPTQ and full precision OPT models. Evaluated on 3 language generation tasks, 4 zeroshot tasks, and OPT models up to 30b parameters.
Pdf: /pdf/178a268a8cc070cb247d000d8059c55034c7a9ef.pdf | NeurIPS_2023_submissions_huggingface | 2,023 |
FaceComposer: A Unified Model for Versatile Facial Content Creation | Accept (poster) | Summary: Advances in generative models have brought significant progress to automatic facial content creation. However, current models remain highly customized and inefficient. To address this, a unified model called FaceComposer is proposed.
- The model leverages image, video, and multi-modal face datasets.
- Better performance than SOTA.
- The user-friendly interface of FaceComposer enables easy face generation, editing, and animation.
Strengths: - Well written text with nice figures and numerical results.
- Promising results supported by numerical results.
- Proposed method works in many styles
- Unified framework enables the proposed method to work on multiple tasks.
Weaknesses: - The interface is confusing.
- Maybe adding human eval can make the work better.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The framework is able to deal with various tasks by using different forms of datasets. I would like to know how much data and computation cost we need for each task in order to make the proposed method work.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - Excellent method supported by well designed experiments.
- Maybe adding human eval can make the work better.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments!
$\textbf{1. Confusing interface.}$
We apologize for the confusion caused by the user interface. Here we provide a detailed explanation of the demo page in Figure A6. The five rectangular boxes in the first row represent five condition inputs, from left to right: Mask, PNCC, Sketch, Identity Feature (IF), and T2F Embedding (T2F). FaceComposer supports condition combinations, so you can check "nothing" in the boxes for the conditions you do not need, and drag an image/video into the corresponding box (or click the box to upload) for the conditions you want. For Mask, we provide nine parsing choices to help you mask the uploaded image or video. For PNCC, one can drop an audio file into the box, and the audio2PNCC module will be called to extract the PNCC sequence. For Sketch, we automatically extract the sketch from the uploaded image or video. For IF, only a reference image is needed; the ArcFace model is called to obtain the IF. For T2F, you can either upload a reference image, from which the T2F is obtained via the Face Clip model, or use the "Prompt / Negative Prompt" input box below to extract the T2F from text. The negative prompt is placed in $c_1$ and the prompt in $c_2$ (see lines 107–109 of the main paper for the meaning of $c_1$ and $c_2$). Below the prompt input box, we show five parameters: (1) Guidance Scale is the guidance weight used in the guidance directions; (2) Num samples is the number of samples to generate; (3) Seed is the random seed used in the random processes of FaceComposer, such as noise generation; (4) Steps is the number of DDIM steps in the inference process; (5) Image resolution, just as its name implies. We hope this clears up the confusion, and we will consider refining the interface in the final version to make it easier to follow.
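For readers unfamiliar with how a prompt/negative-prompt pair and a guidance scale combine at sampling time, a minimal sketch of the standard classifier-free-guidance formula is given below. The function name is ours, and FaceComposer's exact conditioning may differ; this only shows the conventional combination rule.

```python
import numpy as np

def guided_noise_estimate(eps_neg, eps_pos, guidance_scale):
    """Standard classifier-free-guidance combination (hedged sketch).

    `eps_neg` is the model's noise estimate under the negative-prompt
    condition (c_1) and `eps_pos` under the prompt condition (c_2);
    `guidance_scale` is the "Guidance Scale" slider.  The guided
    estimate extrapolates from the negative toward the positive
    condition.
    """
    return eps_neg + guidance_scale * (eps_pos - eps_neg)

eps_neg, eps_pos = np.zeros(4), np.ones(4)
# scale 1.0 reduces to the plain conditional estimate
assert np.allclose(guided_noise_estimate(eps_neg, eps_pos, 1.0), eps_pos)
# scale 0.0 ignores the prompt entirely
assert np.allclose(guided_noise_estimate(eps_neg, eps_pos, 0.0), eps_neg)
```

Scales above 1 push the sample further away from the negative prompt, which is why increasing the slider typically strengthens prompt adherence.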
$\textbf{2. Human eval.}$
Thanks for your suggestion. Tables R2 and R3 present the user studies we added for face generation and face animation; we skip the face editing task since its user study is already shown in Table 3 of the main paper. Participants were asked to score twenty images sampled from the test set for each SOTA method in Table R3 and ten videos in Table R2, where Accuracy and Realism retain the meanings given in lines 195–196 of the main paper, LipSync measures the synchronization between lip movements and speech content, OveralRealness indicates which video is more natural and realistic, and VideoQuality depicts the visual quality of the generated video. It can be observed that FaceComposer achieves the best results on both face generation and face animation.
$\textbf{3. Data and computation cost.}$
We apologize for the misunderstanding caused by our unclear description. We use only one training dataset (1.1M images + 500 hours of video) and do not use different datasets for different tasks. We decompose this multi-modal database into five face-specific conditions to train a unified model. In the inference phase, we can then combine these conditions in different ways to achieve different tasks, as listed in Table 1 of the main paper. We spent approximately four days training FaceComposer on eight NVIDIA A100 GPUs, starting from a pre-trained Latent Diffusion Model.
---
Rebuttal Comment 1.1:
Comment: Thanks for the comments. I read them on day one. I will keep my original rating.
Nice work
---
Reply to Comment 1.1.1:
Comment: We really appreciate your constructive feedback to improve our work, thanks!
Best regards,
The Authors | Summary: The paper presents an all-in-one pipeline that can perform face generation, editing and animation and can be driven by multiple signals such as audio, text and sketches. The proposed model is based on a Latent Diffusion Model (LDM) and works by decomposing images faces into several representations capturing identity (ArcFace embeddings), geometry (PNCCs), face shape (sketch embeddings) and text description (T2F embeddings). These embeddings are then used as conditions and guidance for the LDM, which is taken pre-trained and fine-tuned on a large dataset containing both images and videos. The model is able to train simultaneously with images and videos thanks to temporal attention modules inserted in the U-Net architecture. The model is evaluated on face generation, editing and animation and compared to SoTA methods for each task. Facial animation is performed using an audio-to-PNCC model, which predicts the 50 expression parameters and 3 jaw pose parameters from Wav2Vec embeddings.
Strengths: The proposed framework is indeed flexible and can perform tasks that would normally require multiple modules. The advantage of a unified model apart from the simplicity is the computational efficiency (i.e. low latency), since it can perform several steps in parallel without stringing together multiple models.
The authors have collected a curated, extensive dataset of images and videos. They make sure that the dataset only contains high quality frames and highly correlated text-image pairs. The dataset will be made public and will undoubtedly be very useful for researchers in the field.
The authors include an ablation study showing the effect of training on only videos or only images and share their hypotheses into why the addition of video data results in slightly blurrier frames.
The lip movements appear to be synchronised with the audio and this is also visible in the SyncNet and LMD metrics.
Weaknesses: The proposed framework does not present significant methodological novelty, and it is not clear what the key contributions are beyond the collection of multiple conditions and pretrained models to drive a unified framework. Temporal attention has been used before in [*] and [**], and the conditioning mechanism and guidance used in the paper are standard practice for most diffusion models. Make-A-Video [*] is also trained on both image and video data and takes a similar approach, but is missing from the references. The audio-to-PNCC network is based on StyleTalk but is described only in the supplementary material.
From the supplementary video the performance of the model seems to degrade a lot when the masks are not used (identity feature + PNCCs). This setting is much more challenging and the proposed model seems to struggle to preserve the identity or produce natural head motion. Furthermore the visual quality also seems to degrade. The authors also do not mention if the results of Table 4 are measured when using masks or not. If masks are used then this is an unfair comparison against other methods such as PC-AVS that are truly one-shot and do not simply in-paint the mouth. This would also explain why the SSIM and CPBD are better.
The authors only have an ablation study examining how the use of images and videos during training affects the performance of the model. They have not performed ablation studies to explore how the performance improves with the addition of each condition.
The description of the user interface of the tool is not of particular interest for machine learning research. The authors should consider removing this section from the main paper and using the space to add experiments such as ablation studies or to describe some of the components in more detail.
[*] Singer et.al. "Make-A-Video: Text-to-Video Generation without Text-Video Data"
[**] Ho et. al. "Video Diffusion Models"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The initial clips in the supplementary material seem to be of much higher quality than the ones that follow. The authors should clarify whether the masks are used for the initial videos. This will help put the performance of the model in perspective and determine whether it should be judged as a video dubbing or a one-shot speech-driven facial animation method.
How does the temporal attention used in this paper differ from that used in Make-A-Video [*] or Video Diffusion Models [**]?
[*] Singer et.al. "Make-A-Video: Text-to-Video Generation without Text-Video Data"
[**] Ho et. al. "Video Diffusion Models"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: The authors discuss the limitations of the method in the supplementary material. They also point out some negative applications of methods such as the one proposed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments!
$\textbf{1. Novelty and contributions.}$
We would like to clarify our primary contribution, which is the unified generative framework with various means of controllability for versatile facial content creation. Our design enjoys some merits in both training and inference stages. For training, we only need to train and save one model for most tasks, reducing memory and computation cost. For inference, multi-condition-driven scheme enables us to accomplish combined tasks in one-time forward, improving inference flexibility and efficiency. We also collect a high-quality multi-modal face database to ensure the success of FaceComposer, and we hope such a database would be beneficial to further studies in this field. Besides, thanks for the reminder of the missing references: Make-A-Video and Video Diffusion Models. We will add more discussion in the revision.
$\textbf{2. Mask influence.}$
Firstly, we agree with you that the setting, i.e. Identity Feature (IF) + PNCCs, is challenging since IF only indicates the identity attribute and PNCCs only represent facial geometric information, both of which do not contain detailed information, like texture, skin or hair color.
Secondly, FaceComposer in Table 4 and the initial clips in the supplementary material use a mask, and we show the mask shape in Figure R1. We agree with you that there may be an unfair comparison between video dubbing (e.g., FaceComposer, Wav2Lip) and one-shot talking-head methods (e.g., PC-AVS), although this setting is very **common** in many talking-head practices [1][2][3]. Considering that both categories of methods need to generate the mouth, we add evaluation metrics restricted to the mouth region (SSIM-M and CPBD-M, in addition to M-LMD) to reduce the influence of the mask, as shown in Table R1. It can be seen that our SSIM-M and CPBD-M are still the best, proving the effectiveness of FaceComposer in the non-masked areas.
Thirdly, we want to clarify that the mask is not the key to the generation quality of FaceComposer, and we justify this by providing more videos (please see Rebuttal_Video.mp4 submitted to the AC), including results for IF + PNCCs and other condition combinations without a mask. The high generation quality can be attributed to the foundation generative model (LDM) and the high-quality training data.
[1] A lip sync expert is all you need for speech to lip generation in the wild, ACMMM 2020
[2] Masked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers, SIGGRAPH Asia 2022
[3] StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator, CVPR 2023
$\textbf{3. Ablation studies of condition impact.}$
Following your suggestion, we conduct ablation studies in Table R4 (upper half) to explore the impact of each condition. We note that the performance is stable, rather than improved, with the addition of each condition, because we only increased the number of conditions decomposed from the training data, not the training data itself. Taking the face generation results in Table R4 (upper half) as an example, adding extra conditions (e.g., sketch, IF) to the baseline brings no additional benefit, since only the T2F Embedding is used as the condition in the test phase. In contrast, increasing the size of the dataset introduces new information, which improves performance, as shown in Table R4 (bottom half).
$\textbf{4. User interface.}$
Thanks for the suggestion. We will move the user interface description to the Supplementary Material to leave more space for experiments.
$\textbf{5. Difference of temporal attention.}$
The temporal attention in FaceComposer is no different from that in Make-A-Video / Video Diffusion Models. We describe this module only to make the paper self-contained. We will add the missing references to avoid misunderstanding.
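For completeness, the reshaping trick shared by such temporal attention modules can be sketched as follows (NumPy, with identity Q/K/V projections for brevity; real implementations use learned projections and multiple heads):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(x):
    """Self-attention over the time axis of a (B, T, H, W, C) feature map.

    Spatial positions are folded into the batch dimension so each pixel
    attends only across frames -- the same trick used by Make-A-Video
    and Video Diffusion Models.  Q = K = V = x (no learned projections)
    to keep the sketch minimal.
    """
    B, T, H, W, C = x.shape
    seq = x.transpose(0, 2, 3, 1, 4).reshape(B * H * W, T, C)   # (BHW, T, C)
    attn = softmax(seq @ seq.transpose(0, 2, 1) / np.sqrt(C))   # (BHW, T, T)
    out = attn @ seq                                            # attend over time
    return out.reshape(B, H, W, T, C).transpose(0, 3, 1, 2, 4)  # (B, T, H, W, C)

x = np.random.default_rng(0).standard_normal((2, 4, 3, 3, 8))
assert temporal_self_attention(x).shape == x.shape
```

Because image batches can be treated as single-frame videos (T = 1, where attention is a no-op up to normalization), this design lets one model train jointly on images and videos.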
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanations and additional experiments. I believe that they make the paper clearer for the reader and answered many of my questions. I still believe that the proposed method falls short in facial animation and that the comparison with one-shot approaches is not fair. Since the model is flexible enough to perform one-shot facial animation I do not see the reason why a fair comparison can't be made in this case (i.e., use the masked video version to compare to dubbing approaches and the one-shot version for the others). The only thing that can be directly compared between dubbing and one-shot approaches is mouth movement accuracy. Unfortunately, 2 of the 3 metrics added do not reflect the accuracy of the mouth movement. CPBD and SSIM do not capture mouth movement accuracy and LMD is dependent on mouth position so it may penalise methods capable of generating novel emotions or head movements. SyncNet score is the best metric for mouth accuracy but the proposed approach does not outperform Wav2Lip.
I have also noticed something strange in the reported LMD of Wav2Lip. The F-LMD seems to be higher than M-LMD. This would be understandable for one shot methods but for Wav2Lip (which only inpaints the mouth) I would expect F-LMD to be lower than M-LMD (since landmarks of the rest of the face will match with those of the ground truth).
---
Reply to Comment 1.1.1:
Comment: Thanks for your insightful comments.
$\textbf{1. Why FaceComposer uses masks in face animation evaluations}$
Firstly, we would like to explain why it is unreasonable to pit a one-shot version of FaceComposer against one-shot talking-head methods. In the one-shot setting, the one-shot image can be fed into one of four conditions (Mask, Sketch, IF, or T2F; PNCCs are used for facial motion). When using the one-shot image for Mask, no region should be masked in order to match one-shot talking-head methods. But this use of FaceComposer does not match its training scheme, where we mask one or all of nine face-parsing areas and recover the masked region from the other conditions. Worse, the facial motion in the unmasked image may conflict with that in the PNCCs, resulting in undesired facial motion generation. As for the other three conditions (Sketch cannot capture the texture information in the one-shot image; IF captures only the identity attribute without any pixel-level information; T2F captures only global information), the illumination or speaker appearance of the generated videos may differ from the input images, and none of them are designed for the one-shot talking-head scenario.
Secondly, to further validate the effectiveness of our method, we add a comparison with StyleSync [1], a recently published visual dubbing method whose generation setting is the same as that of FaceComposer (in-painting masked mouth area). As shown in the table below, our method achieves better performance in all metrics, indicating the superiority of FaceComposer.
$\textbf{2. Fair comparisons with prior arts.}$
It is very insightful for the reviewer to consider how to conduct a fair comparison between visual dubbing methods and one-shot methods, which is neglected in prior arts. We may attribute the unfairness issue to two aspects: (1) it is unfair to compare the quality in the non-mouth area, as visual dubbing methods "copy" the area from the input images while one-shot methods generate the area by themselves. (2) the pose of the generated speaker in one-shot methods may change, making it fall short in metrics where ground truth (GT) is used as reference.
For (1), CPBD-M, SSIM-M, and M-LMD all focus on the video quality of the mouth region. For (2), if the pose of visual dubbing and one-shot methods are both aligned with GT, we argue that, in addition to mouth movement accuracy, the video sharpness and appearance consistency (whether the appearance is consistent with that in the input image) of the mouth area should also be evaluated quantitatively, as the quality of these two properties is crucial to video realness. To get closer to this goal, the samples we generated in Table R1/R2 use the head pose and neutral emotion of GT for pose/emotion-controllable methods, such as PC-AVS, StyleTalk, and EAMM. (We acknowledge for methods that can not control poses, unfairness may still exist.) Besides, CPBD-M is a no-reference metric and SSIM-M evaluates structural similarity but not pixel-level one, both can mitigate the effects of speaker pose changes. | Summary: This paper presents a facial content generation framework named FaceComposer, which is based on a Latent Diffusion Model (LDM). The primary aim of this framework is to facilitate text-conditioned face synthesis/editing and animation. The conditions employed in this model encompass a variety of aspects, including mask, PNCC (Projected Normalized Coordinate Code), sketch, identity feature, and Text2Face embedding. To ensure the generation of dynamic content, the authors introduce a temporal self-attention module within the LDM during the training phase. The experimental results affirm the superiority of this framework, showcasing enhanced synthesis quality in both static and dynamic settings. This novel approach thus provides a robust solution to the challenges of face synthesis and animation.
Strengths: 1. The concept underlying this paper is both straightforward and efficacious, offering a user-friendly yet potent solution to the problems at hand.
2. The application of the Projected Normalized Coordinate Code (PNCC) as a condition in the diffusion model, particularly in the context of face animation, presents a unique and stimulating approach to this field of study.
3. The collection of a new dataset comprising more than 500 hours of talking face videos is an important contribution. This sizable dataset is likely to have significant utility in the further exploration and development of this area.
Weaknesses: 1. Comparative Methods: Including the results from StyleTalk [20] in the comparison would be beneficial given its shared design principle of audio2PNCC for talking face generation. This comparison could provide a more comprehensive overview of how the proposed method performs against closely related approaches.
2. Dataset Details: As Table 4 indicates, the release model of all comparison methods isn't trained on HDTF. This makes it difficult to ascertain whether the performance gap results from the FaceComposer itself or the additional training data sourced from Youtube, BBC, and so on. Conducting more ablation studies on the influence of the dataset would provide more clarity in this regard.
3. Missing References: It appears that there are some relevant references missing from the current list.
[a] DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion
[b] Pretraining is all you need for image-to-image translation
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. As highlighted in the perceived weaknesses, the comparison settings for the different methods do not appear to be identical. To provide a fair and comprehensive evaluation, it would be beneficial to include some ablation studies to investigate the effect of dataset scale on generation quality. This additional layer of analysis would contribute significantly to the robustness and validity of the study's findings.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Certainly, the paper could be further enriched by discussing the social implications of the collected dataset. The acquisition of such an extensive dataset, with over 500 hours of talking face videos, has broad implications that could be relevant to many fields. It would be better to include some social impact discussion on it.
Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)', 'Ethics review needed: Responsible Research Practice (e.g., IRB, documentation, research ethics)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments!
$\textbf{1. Comparative methods.}$
Thanks. Table R1 displays the comparison between FaceComposer and StyleTalk. It can be observed that FaceComposer performs slightly better than StyleTalk, for two reasons: 1) FaceComposer employs FLAME to represent the geometric information of the face, which is denser and more expressive than the BFM used by StyleTalk [1]; 2) as the foundation model of FaceComposer, Latent Diffusion Models have greater generative potential than the PIRenderer used by StyleTalk.
$\textbf{2. Dataset details and dataset scale.}$
Firstly, considering that FaceComposer may benefit in Table 4 from the matching distribution of the test and training sets (both from HDTF), while the other methods are not trained on HDTF, we add a test set, MEAD-Neutral, to remove the performance bias introduced in the testing stage, as shown in Table R1.
Secondly, we agree that ablation studies of dataset scale provide a more comprehensive evaluation, so we verify the effect of dataset scale on the face animation task in Table R4 (bottom half); ablations for other tasks are in progress. We observe that: 1) FaceComposer- is still better than the other SOTA methods, proving the superiority of FaceComposer itself; 2) FaceComposer performs better than FaceComposer-, demonstrating that the large-scale dataset is essential for our method, which is why we collected large-scale, high-quality training data to support the different facial content creation tasks.
$\textbf{3. Missing references.}$
Thanks. We will add and discuss them in the final version.
$\textbf{4. Impact of the collected dataset.}$
Thanks. Our dataset will facilitate numerous research areas. The large data volume and high video quality of our dataset are well suited for various image/video generation tasks, such as unconditional face generation, face reenactment, and face swapping. Our dataset also has potential for 3D applications, such as 3D face generation. As a high-quality audio-visual dataset, our dataset facilitates the research in audio-visual speech recognition, speech separation, and audio-driven face animation.
Our dataset can be used for deepfakes, which will have a negative effect. However, our dataset can also be leveraged in forgery detection tasks to prevent such concerns. We filter out harmful content from our dataset to prevent it from being used for malicious purposes. We will do our utmost to regulate the application and acquisition of our dataset to avoid potential misuse.
We will include the discussion in the final version. Thanks.
[1] Learning a model of facial shape and expression from 4D scans, SIGGRAPH Asia 2017
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for your response.
@Reviewer BQjN: Does the rebuttal fully address your concerns?
Best regards,
Your AC | Summary: The paper proposes a unified framework for facial generative models that allows text/spatial/audio condition facial editing tasks. The results are reasonable and comparable to prior works. Based on the stable diffusion prior, this model can generalize well to different style domains.
Strengths: - The paper presents a unified framework that allows different face editing settings including face stylization, audio-driven animation, attribution editing, etc. The simultaneous style transfer and facial animation provides a one-stage solution to stylized facial animation, which avoids error accumulation and computation waste. More importantly, based on the SD prior, this model can generalize arbitrary style in the wild, such as anime, oil painting etc.
- The technical contribution is moderate as most of the modules come from prior works. However, the whole system aims at addressing several face generative tasks in one model which is beneficial for many applications.
- The paper organization and writing are easy to follow.
Weaknesses: - The paper would benefit from an analysis of the pros and cons of jointly training with different conditions using text and experiment results. This is one key to support the effectiveness of the unified framework design.
- Some face generation+editing results in Figure A5 look odd. For example, the mask inpainted face has a different skin color compared to the neck in (a).
- While MakeItTalk, Wav2Lip and PC-AVS are not SOTA methods of face animation, recent related works such as styleTalk, styletalk, AVCT etc. should be discussed and compared if available.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper would benefit from a discussion and comparison with recent related works; this would address my concern above about missing comparisons.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The social impact has been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments!
$\textbf{1. Pros and cons of different conditions.}$
We agree that an analysis of the pros and cons of jointly training with different conditions is important for our unified framework design; we list them below and will add them to the final version.
$\textbf{Pros:}$ Trained with multiple conditions, FaceComposer can support different tasks (e.g. face generation, face editing and face animation) and offers diverse controllability (e.g. accomplishing combined tasks among face creation, editing and animation in a one-time forward pass) **with a unified model**. Table R4 (upper half) shows that the performance of FaceComposer remains stable when changing the number of conditions on a fixed dataset.
$\textbf{Cons:}$ Due to the general design of FaceComposer for a variety of tasks, we need more training data to make different tasks perform well, which inevitably increases the cost of training. With a limited scale of training data, the performance of FaceComposer may degrade on some tasks, as illustrated in Table R4 (bottom half). We thus collect a high-quality large-scale multi-modal face database to alleviate such a problem.
$\textbf{2. Problems in Figure A5.}$
Thank you for pointing it out. In Figure A5(a), FaceComposer needs to fill the masked region of the target face using the Identity Feature from the source face. We admit this is a challenging setting, since the Identity Feature only provides identity attributes, without facial skin or texture information. The skin color mismatch can be alleviated by masking both the face and the neck, and the visualization of this task can be affected by differences between the source and target faces, such as age and gender. We show more results in Figure R2.
$\textbf{3. More SOTA of face animation.}$
Thanks for the reminder; we add more test sets, metrics and SOTA methods in Table R1. As can be seen, FaceComposer achieves the best results, demonstrating the effectiveness of our design for face animation. StyleTalk is second to ours: we share the same audio2PNCC design, but StyleTalk's PNCC representation and generator are inferior to FaceComposer's. SadTalker adopts a similar 3D motion coefficient regression and performs close to StyleTalk. AVCT and EAMM are worse than the former three methods, since AVCT trains its model on a single-identity dataset and the condition design of EAMM cannot make full use of multi-identity datasets.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for your response.
@Reviewer tZ4V: Does the rebuttal fully address your concerns?
Best regards,
Your AC
---
Rebuttal Comment 1.2:
Comment: Thanks for the rebuttal. I have read other reviews and authors' feedback. The rebuttal has addressed all my concerns. More analysis and experiments have been presented. I would keep my initial rating.
---
Reply to Comment 1.2.1:
Comment: Thanks for your feedback and positive comments. We will improve our final version accordingly.
The Authors | Rebuttal 1:
Rebuttal: To all reviewers:
We thank all reviewers for their efforts in reviewing our paper and appreciate their valuable comments. We will address their individual concerns in the rebuttals per review. Here, we list some concerns in common. If not specified, Table (Figure) \*/A\*/R\* refers to the corresponding table (figure) in the main paper, the Supplementary Material, and the rebuttal PDF, respectively.
$\textbf{1. More experiments of face animation in Table 4.}$
For face animation, more experimental comparisons are shown in Table R1. (1) We add four SOTA methods: StyleTalk, AVCT [1], EAMM [2] and SadTalker [3]. It can be seen that FaceComposer outperforms all of them. Note that despite sharing the design principle of audio2PNCC, FaceComposer is superior to StyleTalk, since the FLAME we use to represent PNCC is more expressive than the BFM that StyleTalk adopts [4], and the Latent Diffusion Model in FaceComposer has more generative potential than the PIRenderer in StyleTalk. (2) Since the audio2PNCC module of FaceComposer is trained on HDTF, we add another test set (MEAD-Neutral, a subset of MEAD with only neutral expressions) to demonstrate our superiority across different data distributions. (3) Since some methods use a mask (e.g. FaceComposer, Wav2Lip) and some are one-shot (e.g. PC-AVS, MakeItTalk, StyleTalk), for a fair comparison we add two new metrics (SSIM-M and CPBD-M) besides M-LMD, which evaluate only the mouth-region quality. SSIM-M and CPBD-M stand for SSIM and CPBD calculated in the mouth area, respectively.
$\textbf{2. Ablation study of condition numbers.}$
We add the ablation studies in Table R4 (upper half) to investigate the effect of different numbers of conditions on FaceComposer. Since face generation/editing/animation are the basic tasks, we take FaceComposer with three conditions (T2F Embedding, Mask, PNCC) as the baseline; "baseline + Sketch" means the baseline with the Sketch condition, and "baseline + Sketch + Identity Feature" is equal to FaceComposer with all five conditions. It can be seen that FaceComposer with different numbers of conditions keeps stable performance, whether on the face generation, editing or animation task. We argue this is reasonable, because the training dataset is fixed: when the number of conditions increases, no additional information is introduced for a specific task. Note that adding conditions does not enlarge the training set, but increases the number of conditions decomposed from it.
$\textbf{3. Ablation study of dataset scale.}$
To demonstrate the impact of dataset scale on generation quality, we show an ablation study on face animation in Table R4 (bottom half). More experiments on face generation/editing are currently being conducted and will be included in the final version. Since the SOTA methods generally have dozens of hours of training data, we reduce the training data of FaceComposer to a similar scale for a fair comparison. Specifically, we randomly sample 10 hours of video and 45K images from our original dataset to train a FaceComposer (denoted as FaceComposer-). From Table R4 (bottom half), it can be observed that FaceComposer- is inferior to FaceComposer due to the reduced training data, but it is still better than the other SOTA methods. Beyond the performance gap caused by the smaller dataset, we would like to clarify that FaceComposer is a unified generative model that requires large-scale training data to benefit a variety of facial content creation tasks. Therefore, we collect a large-scale multi-modal face database and will make it public under the constraints of ethical review.
[1] One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning, AAAI 2022
[2] EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model, SIGGRAPH 2022
[3] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation, CVPR 2023
[4] Learning a model of facial shape and expression from 4D scans, SIGGRAPH Asia 2017
Pdf: /pdf/172ebaea5e46564135de59ede966c99fbe2d5bf3.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Calibrating “Cheap Signals” in Peer Review without a Prior | Accept (poster) | Summary: The paper tackles a problem in peer review where reviewers may provide noisy/biased ratings for papers. The paper investigates a "one-shot" scoring process inspired by the "Surprisingly Popular" method that can rank papers by their true quality without any prior knowledge, even if different papers have different noise levels and biases.
Their method relies on eliciting reviewers’ predictions for a random reviewer’s signal and using this for calibration. Specifically, their model assumes that there is a prior distribution over the states (where each paper's state in the "clean" setting is the distribution of reviewers’ signals) and that reviewers are perfect Bayesians who can update their beliefs based on their private signals. In the "noisy" setting, reviewers can only observe noisy versions of their signals, determined by a random mapping that depends on the noise level and the bias vector.
Then, the paper defines the true quality of a paper as the expected score in the clean setting, and aims to design a method that can rank papers by their true quality with high probability, even if they have different noises/biases. For this, they introduce a surprisal vector that measures how "surprising" a paper’s state is compared to the prior distribution, and is normalized by the correlation between reviewers’ signals. The key result is that the surprisal vector is noise-invariant, i.e., the surprisal vector is the same in the clean and the noisy setting, regardless of the noise model. Then, by computing an empirical surprisal-based score for each paper, the mechanism allows papers to be compared in a noise-invariant way, consistent with their true qualities.
Strengths: - The paper is tackling an important problem, which is the question of how to de-bias and de-noise reviewers to obtain a clearer view of paper quality.
- The observation that the surprisal vector is invariant under noise is insightful and interesting, as was the idea to look to the surprisingly popular mechanism.
Weaknesses: - The presentation of the paper could be improved. The intro feels a bit unfocused, there are a number of grammatical errors, and the paper is a bit notation heavy. It might help to move the notation table from the appendix to the main body, and also expand on the "high level ideas" to provide a clearer overview of the proposed method. It took a couple reads to understand exactly what was being proposed.
- The assumption of calibrated agent predictions is not clearly justifiable in a context where agent signals are assumed to be potentially biased.
- In line 162, the result cited by Kong et al. (2018) is outdated, suggest updating with more recent results that improve on this:
Schoenebeck, Grant, and Fang-Yi Yu. "Two strongly truthful mechanisms for three heterogeneous agents answering one question." ACM Transactions on Economics and Computation 10.4 (2023): 1-26.
Srinivasan, Siddarth, and Jamie Morgenstern. "Auctions and prediction markets for scientific peer review." arXiv preprint arXiv:2109.00923 (2021).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - How do you handle det(U) being negative in practice?
- Can you give some intuition for why is the normalization of the surprisal vector important?
- Do the exponential bounds in the general case not depend on the cardinality of the signal space? Can you give some intuition for why this is the case?
- Are the results robust if noise in agents' predictions of other agents' signals correlated with the noise in their own signals?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The main limitation I see is that it is unclear what to do if some of the computed statistics (like det(U)) violate the non-negativity requirement.
The authors are up front about other limitations. These aren't ideal, but may be OK for the goals of this paper.
- Authors assume no incentive issues, i.e., that all agents report truthfully. However, this is unlikely to be the case in practice, especially if participants know that there is
- The authors assume that agents are calibrated in their predictions, even if their signals may be biased. This does feel like a bit of an odd assumption, and is unlikely to be true. Why would signals be biased, but predictions of others' signals be calibrated?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer VrqZ
Thank you for your insightful comments and suggestions.
### W1: The presentation of the paper could be improved
In the final version, we will relocate the symbol clarifications (Table 1, currently in the appendix on page 17) back to Section 2 for ease of reference. In addition, we will provide a more detailed and clearer overview in the "high level ideas". Thank you for the valuable suggestions.
### W3: Outdated result cited by Kong et al. (2018)
Thanks for pointing this out; we will update these references and make the related work section more detailed in the final version.
### Q1: How do you handle det(U) being negative in practice
In our model, when reviewers are Bayesian, det(U) will never become negative. In practice, a negative det(U) suggests that when a reviewer favors a paper, her belief in the likelihood of another reviewer also favoring it decreases. This unusual scenario signals to the Program Committee that this particular paper warrants further discussion and careful consideration before making a final decision.
### Q2: Can you give some intuition for why is the normalization of the surprisal vector important
Recall that in the binary case, our score is defined as (baseline score - prior expected scores) / correlation.
First, the normalization constant adaptively compensates for varying noise levels. When faced with noisy reviewers providing weak signals beyond the prior, the baseline and prior expected scores will closely align, resulting in a correlation near zero. By normalizing using this small correlation, the overall scores are scaled up to compensate for the strong noise and weak signals. Conversely, when noise is weak and the reviewers' signals are strong, there is a clear distinction between the baseline and prior expected scores, leading to a higher correlation. This adaptive normalization ensures the metric's efficacy across contexts with varying noise levels.
Second, only with the normalization do we obtain the invariance results. The core idea is that a metric can enable quantitative comparisons across different papers only if it is invariant to systematically biased noise. Without invariance, biased noise can easily distort comparisons for certain papers: with one noise pattern, paper A outscores paper B, while with a different biased noise pattern, paper B outscores paper A.
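As a toy numeric illustration of the normalization (the numbers below are hypothetical, chosen only to show the effect): strong noise pulls the baseline score toward the prior expected score and shrinks the correlation, and dividing by the correlation recovers the same score either way.

```python
def surprisal_score(baseline, prior_expected, correlation):
    # Binary-case score as stated above:
    # (baseline score - prior expected score) / correlation.
    return (baseline - prior_expected) / correlation

# Hypothetical numbers: the same paper seen through weak vs. strong noise.
weak_noise = surprisal_score(baseline=0.70, prior_expected=0.50, correlation=0.40)
strong_noise = surprisal_score(baseline=0.55, prior_expected=0.50, correlation=0.10)
print(weak_noise, strong_noise)  # approximately 0.5 in both cases
```

Both calls return (up to floating-point error) the same score, illustrating how the normalization adaptively compensates for the noise level.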
### Q3: Do the exponential bounds in the general case not depend on the cardinality of the signal space? Can you give some intuition for why this is the case
In general cases, there exists a "natural" mapping $\varphi$ that allocates a real-valued score to each potential signal. For example, in NeurIPS, this mapping is represented as $r \rightarrow 3, br \rightarrow 4, ba \rightarrow 5, wa \rightarrow 6, a \rightarrow 7$. Such a "natural" mapping is prevalent in most peer review conferences. By employing this mapping, we can apply the concentration bound on the average mapping outcome, which is expressed solely in terms of the value range $\varphi_{\text{max}}-\varphi_{\text{min}}$.
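As a sketch of this point (using the standard Hoeffding inequality; the specific numbers are illustrative), the tail bound on the average mapped score involves only the value range $\varphi_{\text{max}}-\varphi_{\text{min}}$, so enlarging the signal space without widening that range leaves the bound unchanged.

```python
import math

# The NeurIPS-style "natural" mapping from signals to real-valued scores.
phi = {'r': 3, 'br': 4, 'ba': 5, 'wa': 6, 'a': 7}
value_range = max(phi.values()) - min(phi.values())  # phi_max - phi_min = 4

def hoeffding_bound(n, t, value_range):
    # Hoeffding: P(|average score - its expectation| >= t)
    #            <= 2 * exp(-2 * n * t^2 / range^2).
    # The cardinality of the signal space never enters the bound.
    return 2.0 * math.exp(-2.0 * n * t * t / value_range ** 2)

# A finer scale over the same [3, 7] range would give the identical bound.
print(hoeffding_bound(n=50, t=1.0, value_range=value_range))
```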
### Q4: Are the results robust if noise in agents' predictions of other agents' signals correlated with the noise in their own signals
Our assumption in the paper is that the reviewers' biases towards a paper are consistently reflected in their predictions about others. If your question is, "Is the result robust if this correlation is not perfect Bayesian?", then the response would be that our score loses its theoretical guarantee in such cases. Nonetheless, our scores of the papers still provide meaningful insights for the Program Committee, particularly when assessing papers of borderline quality.
### L1: Authors assume no incentive issues
The research question this paper attempts to address is "how to aggregate the biased evaluations in peer review, assuming evaluations are truthful". This research question is parallel and complementary to the research question "how to design incentive mechanisms to obtain truthful evaluations". We can employ mechanisms aimed at eliciting truthful evaluations (for example, the papers you mentioned) to ensure truthfulness, and then use our proposed method for aggregation. We will clarify this in the discussion section in the final version.
### L2: Why would signals be biased, but predictions of others' signals be calibrated
In our paper, we interpret the scenario where a reviewer provides biased evaluation as she may not exert sufficient effort and only have access to "cheap signals". However, she can still report calibrated predictions concerning other reviewers’ evaluation, based on the signal she has. This scenario mirrors the Surprisingly Popular (SP) method [1]. In SP, it is assumed that while agents might give a biased response to a question like "what is the capital of Illinois", they can still provide a Bayesian prediction regarding other people’s answers based on their own response.
[1] Prelec, Dražen, H. Sebastian Seung, and John McCoy. "A solution to the single-question crowd wisdom problem." *Nature* 541.7638 (2017): 532-535.
---
Rebuttal Comment 1.1:
Comment: I thank the author for their clarifications, and I have no further questions. | Summary: This paper proposes a method to calculate peer-reviews scores for papers in the presence of systematically biased noise, such that the score of a paper with a higher expected score in a noise-free regime is higher than the score of a paper with a lower expected score in a noise-free regime with high probability (as the number of peer reviewers grows large). The method does not use reviewers' historical scores to compute their priors. Instead, the method asks reviewers to predict the scores of other reviewers (similar to the Bayesian truth serum). Numerical experiments confirm the theoretical results that the proposed score is better able to distinguish between "good" and "bad" papers under systematically biased noise compared to a baseline that just averages the uncalibrated peer review scores.
**Post-rebuttal:** I am raising my presentation score to 4 (excellent).
Strengths: The paper is clear and precise, and spends sufficient time setting up the problem and building intuition. The proposed method adapts a well-known idea (Bayesian truth serum) to the setting of systematically-biased noise in peer review in a principled manner; this is novel to the best of my knowledge. It is also non-trivial, and provides a useful way to construct peer-review scores without needing historical data for each peer reviewer.
Weaknesses: The generality of the exposition also makes it a bit difficult to follow. I think just limiting the main body of the paper to the binary decision case (accept, reject) and adjusting the notation for this (e.g. a for accept, r for reject) would make it easier to follow the main argument of the paper. The generalization to multi-valued decisions (e.g. review scores) could be relegated to an appendix or to just one theorem.
The numerical experiments do not consider SP as a baseline (possible with some modifications since it cannot be directly applied). A naive application of SP would be the most related method in prior work for the problem considered in this paper. Hence, it would help to see how this would perform.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Q. Could the authors report numerical results from a naive application of SP as a baseline?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have noted the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer tv7v
Thank you for your insightful comments and suggestions.
### W1: Suggestion for limiting the main body of the paper to the binary decision case:
Thank you for your suggestion. We will adopt it in the final version. This allows us to focus the main body on the binary signal case, making it easier for readers to follow the paper's flow and understand the core concepts.
### Q1: Could the authors report numerical results from a naive application of SP as a baseline
The Surprisingly Popular (SP) method can be applied in our setting [1]. The SP method calculates the "prediction-normalized vote" for all signals $i$ as $\frac{w_i}{q_i}$ and asserts that the signal $\arg\max_i \frac{w_i}{q_i}$ is the best signal. Thus, when signals are binary ('reject': 0, 'accept': 1), the SP score of a paper is given by the "prediction-normalized vote" of 'accept' minus that of 'reject', i.e., $\frac{w_1}{q_1}-\frac{w_0}{q_0}$. However, the SP score is not invariant to noise. In some instances, even with infinite reviewers, the SP score fails to determine the relative quality of two papers (a specific example is provided below).
[1] Prelec, Dražen, H. Sebastian Seung, and John McCoy. "A solution to the single-question crowd wisdom problem." *Nature* 541.7638 (2017): 532-535.
**An example that the SP score fails to work**
* Signals are binary ('reject'/'accept').
* The prior over the states is $w_1\sim \text{Uniform}(0,1)$ (paper’s quality is distributed uniformly).
* Paper A has true state $\mathbf{w}_A=[1/3,2/3]$, i.e., paper A has true quality $2/3$.
* Paper A receives no noise, i.e., the noise matrix of paper A is $\mathbf{M}_A=\left[\begin{matrix}1 & 0\\\\0 & 1\end{matrix}\right]$.
* Paper B has true state $\mathbf{w}_B=[1/4,3/4]$, i.e., paper B has true quality $3/4$, which is better than paper A.
* Paper B receives positive noise because its writing is good. The noise matrix of paper B is $\mathbf{M}_B=\left[\begin{matrix}1/3 & 2/3\\\\0 & 1\end{matrix}\right]$.
For paper A, we can calculate that $\hat{\mathbf{w}}_A=[1/3,2/3]$, $\hat{\mathbf{q}}_A=[1/2,1/2]$. Thus the SP score of paper A is $\frac{2/3}{1/2}-\frac{1/3}{1/2}=\frac{2}{3}$.
For paper B, we can calculate that $\hat{\mathbf{w}}_B=[1/12,11/12]$, $\hat{\mathbf{q}}_B=[1/6,5/6]$. Thus the SP score of paper B is $\frac{11/12}{5/6}-\frac{1/12}{1/6}=\frac{3}{5}$.
As a result, paper B has a better true quality than paper A, but has a lower SP score, even if there are an infinite number of reviewers.
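As a sanity check on the arithmetic above, here is a minimal sketch (the helper name `sp_score` is ours) that computes the SP score $\frac{w_1}{q_1}-\frac{w_0}{q_0}$ from the observed vote and prediction frequencies of the example:

```python
from fractions import Fraction as F

def sp_score(w_hat, q_hat):
    # SP score for binary signals ('reject': index 0, 'accept': index 1):
    # prediction-normalized vote for 'accept' minus that for 'reject'.
    return w_hat[1] / q_hat[1] - w_hat[0] / q_hat[0]

# Paper A (no noise): observed votes [1/3, 2/3], predictions [1/2, 1/2].
score_A = sp_score([F(1, 3), F(2, 3)], [F(1, 2), F(1, 2)])
# Paper B (positively biased noise): votes [1/12, 11/12], predictions [1/6, 5/6].
score_B = sp_score([F(1, 12), F(11, 12)], [F(1, 6), F(5, 6)])

print(score_A, score_B)  # 2/3 3/5
# Paper B has the higher true quality (3/4 > 2/3) yet the lower SP score.
assert score_B < score_A
```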
---
Rebuttal Comment 1.1:
Comment: Thanks! This is very helpful. Why not add SP to Figures 4 and 5?
---
Reply to Comment 1.1.1:
Comment: ### Using SP for comparison
Thank you for your feedback!
The method proposed by Prelec et al. (2017) was designed for aggregating multiple reports into a single decision, rather than generating scores for comparing two alternatives. Specifically, when signals are binary, their approach outputs 1 if $\frac{w_1}{q_1}>\frac{w_0}{q_0}$, and 0 otherwise. While effective for combining evaluations into a single decision, this does not directly produce scores that can be used to compare the relative quality of two papers or alternatives.
To enable such comparisons, we propose a new score based on the surprisingly popular idea, $\frac{w_1}{q_1}-\frac{w_0}{q_0}$: the amount of surprisal for 'accept' minus the amount of surprisal for 'reject'. We call it the SP-inspired score.
We run the same numerical experiments as Figure 4 and 5. Since there is no place to attach figures, we describe the simulation results in words.
* The SP-inspired score and the surprisal-based score have similar performance in the settings considered by Figures 4 and 5 when the number of reviewers is small (e.g. $n=3,5$), and the surprisal-based score outperforms the SP-inspired score as the number of reviewers increases. Both of them are better than the simple average.
Conceptually, the example we provided before demonstrates that the SP-inspired score does not calibrate based on the amount of noise. Without calibrating for noise amount, the SP-inspired score will introduce substantial bias against papers with high-quality but more noisy evaluations. Because of the lack of noise calibration, the error rate of the SP-inspired score does not converge to zero as the number of reviewers increases. This makes it challenging to provide theoretical guarantees on the SP-inspired score's performance across all cases. In contrast, the surprisal-based score does calibrate, allowing its error rate to decrease with more reviewers, backed by theoretical guarantees.
However, without calibration, while suffering from bias, the SP-inspired score has a lower variance. When reviewer numbers are small, the SP-inspired score achieves similar average performance to the surprisal-based score.
In the final version, we will include the SP-inspired score and expanded comparison results to further illustrate the strengths and limitations of both methods. | Summary: The paper considers the problem of comparing two papers based on noisy ratings, where the noise can be arbitrarily biased for different papers. The paper elicits from each reviewer both a rating and a distribution of predicted ratings from other reviewers’ (based on a Bayesian update of the common prior, known to the reviewers). The paper proposes a scoring method that recovers the correct comparison between papers from the noisy reports, which essentially corrects the reported ratings using the reported prediction distributions. The authors theoretically show a bound on the error probability of a comparison based on their scoring method, and experimentally evaluate the comparison accuracy.
Strengths: - The problem considered by the paper, calibrating biased noise in a one-shot comparison between items, is conceptually interesting. The authors motivate this problem well in the peer review setting with examples of cases where multiple reviewers may have the same bias.
- The authors prove a theoretical bound on the convergence rate of their method’s error probability to 0. The experimental settings considered are thorough (and demonstrate aspects of the theoretical result).
Weaknesses: - While the problem itself is interesting, the assumption that all agents share and know a common noisy prior is very strong; e.g., in the peer review examples in the introduction, while the noise may be biased in the same direction for all reviewers, the noisy prior may not even be similar for all reviewers. Given this assumption, I’m unsure about the significance of the proposed method, which essentially recovers this prior (as in past work) and uses it to standardize the reports.
- I found the clarity of the writing to be generally poor, leading to some significant confusion at points (particularly in Sections 1 and 2). For example, in Section 1, I did not understand the claim following Example 2 (that a higher noise level would result in a lower expected score). In Section 2, it was difficult to follow which aspects are observable by agents and by the mechanism.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: - Could the authors expand on what makes the setting of this work unique as compared to the settings of the past works referenced in Section 1.1 (Lines 123-129)? Are none of these works applicable as a baseline for comparison?
- The invariant surprisal vector concept is given a lot of focus (Section 3), when the purpose of this concept seems to only be that it’s used to prove Corollary 1. Since the surprisal-based score (in Section 4) is the main contribution, I might suggest making it more clear in Section 3 what the purpose of the invariant surprisal vector is, in order to keep the focus on the main contributions.
- Could the authors clarify the aspects of Section 1 and 2 referenced above?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors adequately address the limitations of the assumptions made (perfect Bayesian and identical reviewers).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer 6t8A
Thank you for your insightful comments and suggestions.
### W1: Assumption of common noisy prior is very strong
We interpret the scenario where different reviewers have different priors as if they share a common prior but possess distinct private information. The "common prior assumption" is a foundational hypothesis in economics [1]. Works that employ this assumption model agents' differences as "information asymmetry" rather than "prior asymmetry". In other words, they assume that differences among agents arise from variations in their private information, rather than differences in their prior beliefs [2].
[1] Morris, Stephen. "The common prior assumption in economic theory." *Economics & Philosophy* 11.2 (1995): 227-253.
[2] Aumann, Robert J. "Agreeing to Disagree." *The Annals of Statistics* (1976): 1236-1239.
### Q1: What makes the setting of this work unique as compared to the settings of the past works
In Section 1.1, we classify related work into two primary categories: "bias in peer review" and "reducing bias via second-order information".
Regarding the "bias in peer review" category, our objectives align with the broader goal of mitigating bias in the peer review process. However, the main distinction between ours and previous works is that our method is parameter-free and operates without historical data. In contrast, previous studies typically need to fit model parameters or access to historical data.
Regarding the "reducing bias by second-order information" category, our study tackles a problem distinct from those addressed in previous works. They primarily concentrate on the aggregation of forecasts. The settings and methodologies of most works cannot be extended to comparing paper quality in peer review contexts.
The Surprisingly Popular (SP) method can be applied in our setting [3]. The SP method calculates the "prediction-normalized vote" for all signals $i$ as $\frac{w_i}{q_i}$ and asserts that the signal $\arg\max_i \frac{w_i}{q_i}$ is the best signal. Thus, when signals are binary ('reject': 0, 'accept': 1), the SP score of a paper is given by the "prediction-normalized vote" of 'accept' minus that of 'reject', i.e., $\frac{w_1}{q_1}-\frac{w_0}{q_0}$. However, the SP score is not invariant to noise. In some instances, even with infinite reviewers, the SP score fails to determine the relative quality of two papers.
[3] Prelec, Dražen, H. Sebastian Seung, and John McCoy. "A solution to the single-question crowd wisdom problem." *Nature* 541.7638 (2017): 532-535.
**An example that the SP score fails to work**
* Signals are binary ('reject'/'accept').
* The prior over the states is $w_1\sim \text{Uniform}(0,1)$ (paper’s quality is distributed uniformly).
* Paper A has true state $\mathbf{w}_A=[1/3,2/3]$, i.e., paper A has true quality $2/3$.
* Paper A receives no noise, i.e., the noise matrix of paper A is $\mathbf{M}_A=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}$.
* Paper B has true state $\mathbf{w}_B=[1/4,3/4]$, i.e., paper B has true quality $3/4$, which is better than paper A.
* Paper B receives positive noise. The noise matrix of paper B is $\mathbf{M}_B=\begin{bmatrix}1/3 & 2/3\\ 0 & 1\end{bmatrix}$.
For paper A, $\hat{\mathbf{w}}_A=[1/3,2/3]$ and $\hat{\mathbf{q}}_A=[1/2,1/2]$. The SP score of paper A is $\frac{2/3}{1/2}-\frac{1/3}{1/2}=\frac{2}{3}$.
For paper B, $\hat{\mathbf{w}}_B=[1/12,11/12]$ and $\hat{\mathbf{q}}_B=[1/6,5/6]$. The SP score of paper B is $\frac{11/12}{5/6}-\frac{1/12}{1/6}=\frac{3}{5}$.
As a result, paper B has better true quality than paper A but receives a lower SP score, even with infinitely many reviewers.
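The computation in this example can be checked mechanically. Below is a minimal sketch (ours, not the paper's code) that pushes each paper's true state and the uniform-prior mean $[1/2,1/2]$ through its noise matrix and evaluates the binary SP score $\frac{w_1}{q_1}-\frac{w_0}{q_0}$ with exact rational arithmetic; function names are our own.

```python
from fractions import Fraction as F

def apply_noise(v, M):
    # Push a distribution over clean signals through the noise matrix M
    # (M[s][t] = probability that clean signal s is received as t).
    return [v[0] * M[0][0] + v[1] * M[1][0],
            v[0] * M[0][1] + v[1] * M[1][1]]

def sp_score(w_hat, q_hat):
    # Binary SP score: prediction-normalized vote for 'accept'
    # minus that for 'reject'.
    return w_hat[1] / q_hat[1] - w_hat[0] / q_hat[0]

prior_mean = [F(1, 2), F(1, 2)]  # E[w] under w_1 ~ Uniform(0, 1)

# Paper A: true state [1/3, 2/3], identity (no) noise.
M_A = [[F(1), F(0)], [F(0), F(1)]]
score_A = sp_score(apply_noise([F(1, 3), F(2, 3)], M_A),
                   apply_noise(prior_mean, M_A))

# Paper B: true state [1/4, 3/4], positive noise.
M_B = [[F(1, 3), F(2, 3)], [F(0), F(1)]]
score_B = sp_score(apply_noise([F(1, 4), F(3, 4)], M_B),
                   apply_noise(prior_mean, M_B))

print(score_A, score_B)  # 2/3 and 3/5: B is better yet scores lower
```

The exact fractions make it easy to see that the ranking is inverted regardless of sampling noise.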
### Q2: Purpose of the invariant surprisal vector
The core idea is that a metric can facilitate quantitative comparisons across different papers only if it is invariant to systematically biased noise. Without invariance, comparisons between certain papers can easily be skewed by biased noise: under one noise pattern, paper A outscores paper B, while under a different biased noise pattern, paper B outscores paper A.
Specifically, the invariant surprisal vector forms the main building block for the design of the surprisal-based score. This score effectively compares papers as if we had their true quality, especially as the number of reviewers goes to infinity. This property crucially relies on the invariance. We will clarify it in the final version.
### Q3: Could the authors clarify the aspects of Section 1 and 2
**Clarification of Example 2**
In Example 2, we want to express that "a higher noise level would result in a lower expected score for high-quality papers", rather than "a higher noise level would result in a lower expected score".
In popular topics, the abundance of expert reviewers ensures diminished noise in the review process, so reviewers' evaluations align closely with the true paper quality. Conversely, in topics where expert reviewers are scarce, the review process is prone to conservative noise; consequently, there is a higher probability that reviewers' evaluations deviate from the actual quality of the papers.
**Clarification of which aspects are observable by agents and by the mechanism**
In brief, each paper has a true state $\mathbf{w}$ representing its actual quality (which nobody observes). Reviewer $i$ is aware of the prior distribution $\mathbf{Q}$ over $\mathbf{w}$ and receives a signal $x_i$ reflective of the paper's true state $\mathbf{w}$. Using both the prior $\mathbf{Q}$ and the signal $x_i$, reviewer $i$ forms a posterior distribution on $\mathbf{w}$ to anticipate the evaluations of fellow reviewers, represented as $\mathbf{q}_i$. The mechanism aggregates all signals and predictions $(\mathbf{q}_i, x_i)$ to assign the paper a score $S^*$. In the final version, we will relocate the symbol clarifications (Table 1, currently in the appendix on page 17) back to Section 2 for ease of reference.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for your thorough response and clarifications.
- The example showing the failure of the SP mechanism was helpful. Since this mechanism is applicable here, I think it would be good to add SP to the experimental comparison as an alternative baseline (or to explain why such a comparison wouldn't be useful).
- Regarding example 2: My understanding of the example is still that that bias in the noise is the issue ("conservative noise"): i.e., if the paper with non-expert reviewers has additional zero-mean noise in the ratings, this wouldn't change the expected fraction of accepts. In the paper, only the term "more noisy ratings" is used without mentioning the bias, which I think is more commonly interpreted as zero-mean noise. This is particularly confusing since the following example 3 specifically focuses on biased noise.
- I agree that moving the symbol table to the main paper would help improve the clarity.
After reading the other reviews and the author response, I will improve my score from a 4 to a 5 for the time being. The contextualization of the work within the literature was helpful, and I hope that the proposed changes will improve the clarity. Despite this, my main concern about the presentation of the work still stands, and I still remain somewhat unsure about the significance of the results.
---
Reply to Comment 1.1.1:
Comment: ### Using SP for comparison
Thank you for your feedback!
The method proposed by Prelec et al. (2017) was designed for aggregating multiple reports into a single decision, rather than generating scores for comparing two alternatives. Specifically, when signals are binary, their approach outputs 1 if $\frac{w_1}{q_1}>\frac{w_0}{q_0}$, and 0 otherwise. While effective for combining evaluations into a single decision, this does not directly produce scores that can be used to compare the relative quality of two papers or alternatives.
To enable such comparisons, we propose a new score based on the surprisingly popular idea, $\frac{w_1}{q_1}-\frac{w_0}{q_0}$: this measures the amount of surprisal for 'accept' minus the amount of surprisal for 'reject'. We call it the SP-inspired score.
We run the same numerical experiments as Figure 4 and 5. Since there is no place to attach figures, we describe the simulation results in words.
* The SP-inspired score and the surprisal-based score have similar performance in the settings considered in Figures 4 and 5 when the number of reviewers is small (e.g., $n=3,5$), and the surprisal-based score outperforms the SP-inspired score as the number of reviewers increases. Both are better than the simple average.
Conceptually, the example we provided above demonstrates that the SP-inspired score does not calibrate based on the amount of noise. Without calibrating for the noise amount, the SP-inspired score introduces substantial bias against papers with high quality but noisier evaluations. Because of this lack of noise calibration, the error rate of the SP-inspired score does not converge to zero as the number of reviewers increases, which makes it challenging to provide theoretical guarantees on its performance across all cases. In contrast, the surprisal-based score does calibrate, allowing its error rate to decrease with more reviewers, backed by theoretical guarantees.
However, without calibration, while suffering from bias, the SP-inspired score has a lower variance. When the number of reviewers is small, the SP-inspired score achieves average performance similar to that of the surprisal-based score.
In the final version, we will include the SP-inspired score and expanded comparison results to further illustrate the strengths and limitations of both methods.
### Example 2
Thanks for pointing this out. We will clarify this in Example 2. Non-expert reviewers have difficulty evaluating technical quality and novelty, but have access to superficial, cheap signals. To reflect these limitations, in our setting, non-expert reviews are modeled not as zero-mean noise, but as biased towards the prior rating for papers. | Summary: This paper aims to detect and correct bias in Peer Review. They propose a one-shot noise calibration process without any prior information. Experiments are conducted on the binary case to show the effectiveness of the proposed method.
Strengths: 1. The studied problem is important.
2. Theoretical guarantee is provided for the proposed calibrated score.
3. The organization of this paper is clear.
Weaknesses: 1. The rationality of the proposed surprisal scores needs further support. For example, in Figure 1, it seems that the left paper with lower negative reviewers’ prediction is the same as the right paper with higher negative reviewers’ prediction, while the left paper receives 1 accept and 2 rejects and the right paper receives 2 accepts and 1 reject. This result is confusing since the reviewers may give low P_0,1 simply due to the poor quality of the left paper.
2. The experiments are weak in demonstrating the generality of the proposed method. For example, in Sec 6, the authors only conduct experiments on the binary case, lacking the more general settings with more types of possible signals.
3. The theoretical guarantees lack a clear explanation. For example, in theorem 2, why there is 1/2 in the error probability (Pr[S(A) > S(B)|w1^A,w2^B]+1/2 Pr[S(A)=S(B)|w1^A,w1^B])? The guarantee of such an error probability may still remain a gap in the performance guarantee of the proposed method.
4. Some assumptions or claims lack further support and need more clarification or explanation. For example, in Sec 2.2, why “only consider the noise where M is invertible?” What will be the situation when M is non-invertible? Why can the vector q and the joint distribution matrix U be constructed from the prediction matrix P as stated in claim 1? Is it universally applicable to assume that 'each individual receives the clean signal with probability 1-\lambda’?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. In theorem 2, why there is 1/2 in the error probability (Pr[S(A) > S(B)|w1^A,w2^B]+1/2 Pr[S(A)=S(B)|w1^A,w1^B])? The guarantee of such an error probability may still remain a gap in the performance guarantee of the proposed method.
2. In Sec 2.2, why “only consider the noise where M is invertible”? What will be the situation when M is non-invertible?
3. Why can the vector q and the joint distribution matrix U be constructed from the prediction matrix P as stated in claim 1?
4. Is it universally applicable to assume that 'each individual receives the clean signal with probability 1-\lambda’?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have discussed the limitations of this paper in Sec 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer UucL
Thank you for your insightful comments and suggestions.
### W1: Rationality of the proposed surprisal scores
Our understanding of your question is that you are seeking a more intuitive explanation of Figure 1, especially concerning why our method yields identical scores in the following two scenarios:
* Scenario A: 1 accept and 2 rejects, where the 'accept' reviewer predicts $P_{1,1}=0.55$ and the 'reject' reviewers predict $P_{0,1}\approx 0.07$.
* Scenario B: 2 accepts and 1 reject, where the 'accept' reviewers predict $P_{1,1}=0.55$ and the 'reject' reviewer predicts $P_{0,1}\approx 0.4$.
In Scenario A, the 'reject' reviewers anticipated that very few would vote for 'accept'. However, in fact 1/3 of the reviewers voted for 'accept', making 'accept' a surprisingly popular signal. This suggests that the reviewers might have been swayed by noise skewing towards 'reject'. In contrast, in Scenario B the 'accept' votes aligned closely with the predictions, suggesting minimal interference from noise. According to our model, when a voting outcome exceeds the crowd's predictions, it is more likely to be accurate. This reasoning accounts for the identical scores assigned to both scenarios.
### Q1(1): In theorem 2, why there is 1/2 in the error probability (Pr[S(A) > S(B)|w1^A,w2^B]+1/2 Pr[S(A)=S(B)|w1^A,w1^B])?
The 1/2 arises because we use random selection to break ties when comparing two papers that have the same score. For instance, a tie might arise when all reviewers vote 'accept' for both papers (both scores being set to $+\infty$). In such situations, there is no additional information with which to compare the quality of the two papers, which induces the 1/2 in the error probability.
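To make the role of the 1/2 concrete, here is a toy sketch (our illustration, not the paper's code): over equally likely score outcomes where paper B is truly better, a wrong ordering counts fully toward the error, and a tie, broken by a fair coin, counts half.

```python
def comparison_error(score_pairs):
    # score_pairs: equally likely (S_A, S_B) outcomes, with paper B
    # truly better. A tie is broken by a fair coin, so it contributes
    # 1/2 to the error - the origin of the 1/2 term in Theorem 2.
    err = 0.0
    for s_a, s_b in score_pairs:
        if s_a > s_b:       # wrong ordering
            err += 1.0
        elif s_a == s_b:    # tie, fair-coin break
            err += 0.5
    return err / len(score_pairs)

# Four equally likely outcomes: one wrong ordering, one tie.
p = comparison_error([(1, 2), (0, 3), (2, 1), (2, 2)])  # (1 + 0.5) / 4
```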
In real-world scenarios, we typically only compare papers that have received at least one "accept" and one "reject" evaluation. This is because papers with unanimous 'accept' evaluations are generally accepted, while those with all 'reject' evaluations are rejected. Given this practice, the likelihood of two papers garnering the same score is low.
### Q1(2): The guarantee of such an error probability remains a gap in the performance guarantee of the proposed method.
The main reason for the gap is that our theoretical guarantees encompass all conceivable scenarios, whereas simulation experiments evaluate only a finite set of situations. It is crucial to underscore the importance of theoretical guarantees, as they validate the effectiveness of our mechanism. By 'effectiveness', we mean that given a sufficient number of reviewers, our method can attain arbitrarily low error rates; even with a limited number of reviewers, the error rate remains constrained and predictable.
### Q2: Why only consider the noise where M is invertible
The assumption of "invertible correlation" is prevalent in the peer prediction literature [1-2]. In our paper, the term "invertible noise" (i.e., the noise matrix $M$ is invertible) signifies an "invertible correlation" between each reviewer's received signal and their clean signal. In the binary case, if $M$ is non-invertible, the reviewer's received signal and clean signal become independent, rendering the received signal uninformative. For general cases, the non-invertibility of $M$ implies that there exist two papers with distinct true qualities that receive the same evaluation distribution from reviewers. This prevents the existence of an effective aggregation method when the noise is non-invertible.
[1] Kong, Yuqing. "Dominantly truthful multi-task peer prediction with a constant number of tasks." Proceedings of the fourteenth annual acm-siam symposium on discrete algorithms. Society for Industrial and Applied Mathematics, 2020.
[2] Schoenebeck, Grant, and Fang-Yi Yu. "Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach." 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2021.
### Q3: Why the vector q and the joint distribution matrix U can be constructed from the prediction matrix P as stated in claim 1
Claim 1 in this paper was initially introduced by Prelec et al. in the context of the Surprisingly Popular (SP) method [3]. Here we briefly explain its intuition. Applying Bayes' rule, we deduce $q_t=q_s\frac{P_{s,t}}{P_{t,s}}$. Given that $\sum_t q_t = 1$, we can solve for the prior probability $q_s = (\sum_{t}\frac{P_{s,t}}{P_{t,s}})^{-1}$. Regarding the joint distribution $U$, by definition, we have $U_{s,t}=q_s P_{s,t}$.
[3] Prelec, D., Seung, H. & McCoy, J. A solution to the single-question crowd wisdom problem. Nature 541, 532–535 (2017).
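As a concreteness check, the two identities above can be implemented directly. The joint distribution `U_true` below is a hypothetical example (chosen only so that the prediction matrix it induces is consistent), and `recover_prior_and_joint` is our naming, not the paper's.

```python
from fractions import Fraction as F

def recover_prior_and_joint(P):
    # Claim 1 recovery: given the prediction matrix
    # P[s][t] = Pr(peer's signal = t | own signal = s),
    # solve q_s = (sum_t P[s][t]/P[t][s])^-1, then U[s][t] = q_s * P[s][t].
    n = len(P)
    q = [1 / sum(P[s][t] / P[t][s] for t in range(n)) for s in range(n)]
    U = [[q[s] * P[s][t] for t in range(n)] for s in range(n)]
    return q, U

# Hypothetical symmetric joint distribution over two binary signals,
# used here only to generate a consistent prediction matrix.
U_true = [[F(3, 10), F(1, 10)], [F(1, 10), F(1, 2)]]
q_true = [sum(row) for row in U_true]  # signal marginals (the prior)
P = [[U_true[s][t] / q_true[s] for t in range(2)] for s in range(2)]

q, U = recover_prior_and_joint(P)  # recovers q_true and U_true exactly
```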
### Q4: Is it universally applicable to assume that each individual receives the clean signal with probability 1-lambda?
In the paper we define $\mathcal{M}^*$ as the noise family where an individual receives the clean signal with a probability of $1-\lambda$ and a biased value (which is independent of quality) with a probability of $\lambda$. In Claim 2, we establish that $\mathcal{M}^*$ encompasses all positively correlated ($M_{1,1} > M_{0,1}$) and invertible noises in the context of binary signals. This emphasizes that $\mathcal{M}^*$ serves as a universal noise model for binary signals. However, in general settings, $\mathcal{M}^*$ is not comprehensive enough to describe all potential noise. In Appendix B, we introduce a modified metric that remains invariant across all possible invertible noises. We also demonstrate its capability in identifying the true state $\mathbf{w}$ under specific circumstances. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful suggestions and comments on our manuscript. We are pleased that the reviewers recognize the novelty (Reviewer SV22, tv7v, VrqZ), clarity in writing and organization (Reviewer SV22, UucL, tv7v), as well as the sound theoretical guarantees and synthetic experiments (Reviewer SV22, UucL, 6t8A). We will address the inquiries and comments from each reviewer separately. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper tries to solve famous problem of removing bias and noise during peer-review process. The authors address the problem in one-shot setting (without historical data) and propose a novel approach that allows to remove influence of bias and noise (under some assumptions on bias and noise) on the ranking of assessed elements (reviewed papers). The approach uses scoring adjustment (used for ranking) and requires additional signal (action) from each reviewer.
The authors provide theoretical guarantees and small synthetic experiments.
Strengths: - Overall good organization of the paper storyline (very good Intro!)
- More or less clear statements and easy to follow
- Synthetic experiments
Weaknesses: - Some drawbacks in clarity related to assumptions and limitations of the setup
- Seem limited practical effect
- The contribution does not look significant enough for NeurIPS
- Lack of comparison with alternative ways to improve scoring (e.g., non-1-shot ones, etc)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. It would be nice to understand whether the assumptions are realistic (for instance, Lines 59-60 “the correct answer is positively correlated to each agent’s signal” ; Lines 82 83 “assume that the noise is positively correlated .. ”; and the other). Right now the assumptions look theoretical with no clear understanding, e,g., how frequent they are in practice.
2. The specific assumption about particular cases of noises (Lines 203-205) looks very important while being deferred to Sec3 as Definition 1 (instead of being kept in Section 2 where environment of the problem is described). It makes feeling of overclaiming in Intro, where our Results (Lines 85-91) are stated without assumption of considering invertible noise (while other assumptions and limitations are addressed before Line 85). It would be nice to have some presentation improvements to address this gap.
3. The authors claim that it is nice to have the result about best ranking when the number of reviewers goes to infinity. However, I believe it is not so important result to initially stated problem, where the limitations on the number of reviewers is crucial part of the problem (in most peer review practice we deal with 3-5 reviewers, e.g., see the case of NeurIPS). So, OK to have such theorem, but this result seems not so important and relevant to the problem taken for the research. It would be nice, if the authors focus at least the presentation more on how their results help in real practical cases like n = 3-5. (Yes, I see that the authors are aware of such numbers by providing experiments with such numbers. But (a) it might be just because of simplification of experimentation; and (b) the main contribution lies in the theoretical part of the work).
4. While reading from the first page till the end, I had observed changing perception (for sure, mine, subjective) of the studied setting. From the beginning of the paper till Page 6 (Lines 189-190), I thought that the setup assumes that all reviewers assess all papers, while, in practice, each agent review few numbers of papers. Sections 2.1 2.2 have made me sure that the setup is so strange. Only, Lines 189+ reverted my perception. So, I would strongly recommend to somehow help such readers as I figure out this earlier than Page 6.
5.The statement in Lines 168-169 «we focus on homogeneous noise setting where the noise is the same for all reviewers and will discuss the heterogeneous setting in Section 7" makes feeling of covering “heterogeneous setting” by this work. After reaching Section 7, I have found that Section 7 is Conclusion section and heterogeneous setting are discussed as some future work... Missed expectation
========
AFTER REBUTTAL
I thank the authors for answering the questions. I hope that the answers will be reflected and fully addressed in the new revision of the work.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Some presentation improvements may address limitations better
See points 1, 2 and 3 in the field “Questions”.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reviewer SV22
Thank you for your insightful comments and suggestions.
### Q1: Whether the assumptions are realistic
**Positively correlated**
In the assumptions mentioned in lines 59-60, "positively correlated" indicates that the correct answer has a positive correlation with the signal, as discussed in [1]. For example, if the signal is "yes", the posterior belief that the correct answer is "yes" would be greater than the prior belief. In our model (lines 82-83), the positive correlation, denoted as $M_{1,1}>M_{0,1}$, indicates that the reviewer's clean signal correlates positively with her received noisy signal. This assumption is realistic, as a negative correlation would imply that if the reviewer receives 'accept', her posterior belief regarding her clean signal (which she can ascertain with full effort) being "accept" would be lower than the prior belief, which is illogical. We will clarify the distinction of "positively correlated" in lines 59-60 and 82-83 in the final version.
**Non-degenerate noise**
The assumption of "invertible correlation" is prevalent in the peer prediction literature [2-3]. In our paper, the assumption of "non-degenerate noise" in lines 82-83 (and "invertible noise" in line 206) means an "invertible correlation" between each reviewer's received signal and their clean signal. In the binary case, if $M$ is non-invertible, the reviewer's received signal and clean signal become independent, rendering the received signal uninformative. In general cases, the non-invertibility of $M$ implies that there exist two papers with distinct true qualities that receive identical evaluation distributions from reviewers. This prevents the existence of an effective aggregation method when the noise is non-invertible.
In the final version, we will standardize the terms "non-degenerate noise" and "invertible noise" to consistently use "non-degenerate noise".
[1] Prelec, D., Seung, H. & McCoy, J. A solution to the single-question crowd wisdom problem. *Nature* **541**, 532–535 (2017).
[2] Kong, Yuqing. "Dominantly truthful multi-task peer prediction with a constant number of tasks." Proceedings of the fourteenth annual acm-siam symposium on discrete algorithms. SODA 2020.
[3] Schoenebeck, Grant, and Fang-Yi Yu. "Learning and Strongly Truthful Multi-Task Peer Prediction: A Variational Approach." ITCS 2021.
### Q2: Results (Lines 85-91) are stated without assumption of considering invertible noise
The results (lines 82-91) are stated under the assumption that the noise is non-degenerate. In our paper, the term "non-degenerate" is synonymous with "invertible". In the final version, we will standardize the terms "non-degenerate noise" and "invertible noise" to consistently use "non-degenerate noise", and amend the phrase "Within the model, .." in line 85 to "Within the model and assumptions, ..". Thank you for pointing this out.
### Q3: How the results help in real practical cases like 3-5 reviewers
Theorem 2 offers results for finite values of $n_A$ and $n_B$. Here, $n_A$ represents the number of reviewers for paper A, while $n_B$ represents the number of reviewers for paper B. Consequently, Theorem 2 establishes an upper bound on the error probability for realistic practical cases, such as when $n=3,5$.
For a more practical perspective, consider this: usually, we only wish to compare papers that have received at least one "accept" and one "reject" evaluation. This is because papers with unanimous 'accept' evaluations are typically accepted, while those with all 'reject' evaluations are rejected. In such scenarios, the theoretical upper bound can be simplified to: $\exp\left(-\frac{2(w_1^B-w_1^A)^2}{\frac{1}{n_A(1-\lambda_A)^2}+\frac{1}{n_B(1-\lambda_B)^2}}\right)$. Here, $w_1^B-w_1^A$ signifies the average disparity in true quality, and $\frac{1}{n_A(1-\lambda_A)^2}+\frac{1}{n_B(1-\lambda_B)^2}$ relates to the number of reviewers and the maximum bias.
This bound underscores that our methodology's error rate diminishes as the gap in true quality grows more pronounced. For example, when there are 5 reviewers with a maximum bias of 0.3, the error bound approximates to $\exp\left(-2.5(w_1^B-w_1^A)^2\right)$. The bound is reasonable, especially given that the bias can be arbitrary. To further elucidate, in Section 6, we incorporate simulation experiments to compare our approach with the baseline.
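The simplified bound can also be evaluated numerically. The sketch below is our illustration (function name and parameter values are ours): with $n_A=n_B=5$ and $\lambda_A=\lambda_B=0.3$, the exponent coefficient is $n(1-\lambda)^2=2.45\approx 2.5$.

```python
import math

def error_bound(delta_w, n_A, lam_A, n_B, lam_B):
    # Simplified upper bound from the rebuttal:
    # exp(-2*dw^2 / (1/(n_A*(1-lam_A)^2) + 1/(n_B*(1-lam_B)^2))).
    denom = 1 / (n_A * (1 - lam_A) ** 2) + 1 / (n_B * (1 - lam_B) ** 2)
    return math.exp(-2 * delta_w ** 2 / denom)

# 5 reviewers per paper, maximum bias 0.3, true-quality gap 0.2.
b = error_bound(0.2, 5, 0.3, 5, 0.3)  # ~= exp(-2.45 * 0.2**2)
```

A larger quality gap or more reviewers shrinks the bound, matching the discussion above.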
### Q4: Observed Changing perception of the studied setting
Thank you for the valuable feedback. We will specify within Example 1 that reviewers have the flexibility to assess any number of papers, and that different papers may be subject to varying biases.
### Q5: Discussion of heterogeneous reviewer noise
In one-shot settings, tackling heterogeneous noise is challenging, primarily because the mechanism has no access to historical data from reviewers. Moreover, each reviewer typically assesses only a few papers during the review cycle. While existing research on peer grading employs Gibbs sampling to gauge the bias of each evaluator, this method necessitates a significant number of evaluations from each evaluator [4-5]. Practically, certain techniques can mitigate noise at the reviewer level. For instance, using benchmark papers can help calibrate reviewers' evaluations. In our final version of the paper, we will reintroduce the relevant discussion that was previously condensed due to page limit.
[4] Piech, Chris, et al. "Tuned models of peer assessment in MOOCs."
[5] Mi, Fei, and Dit-Yan Yeung. "Probabilistic graphical models for boosting cardinal and ordinal peer grading in MOOCs." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 29. No. 1. 2015. | null | null | null | null | null | null |
Beyond Unimodal: Generalising Neural Processes for Multimodal Uncertainty Estimation | Accept (poster) | Summary: The paper tackles the problem of uncertainty estimation specifically for multi-modal data, i.e., inputs consisting of different sources. Specifically, it improves the popular Neural Process (NP) in three aspects: dynamic context update, multi-modal Bayesian aggregation, and a novel attention mechanism based on the RBF kernel. The paper demonstrates superior test accuracy, uncertainty estimation (calibration and OOD detection), and robustness on multiple benchmarks compared to prior works.
Strengths: * The paper studies the less well-studied area of uncertainty estimation for multi-modal data. This has practical implications as many real-world applications can be multi-modal.
* The paper presents multiple metrics including accuracy, calibration, OOD detection, and robustness, which provide a holistic evaluation of the proposed method.
* The idea of dynamic context update is most interesting to me. Specifically, the method replaces uninformative context samples with informative ones where "uninformative-ness" is measured by attention weights and "informative-ness" is measured by difficulty of classification.
Weaknesses: * While the paper presents three innovations and claims that they are all tailored to multi-modal data, only the Bayesian aggregation is inherently related to multi-modal inputs. The dynamic context update and the RBF context mechanism do not utilize the multi-modal characteristic. This makes the contributions and claims less coherent.
* It seems the Bayesian aggregation mechanism is not ablated. It's not clear how much improvement it brings to the overall performance as it is the main mechanism responsible for tackling multi-modal inputs. Does the improved robustness come from the aggregation algorithm or the other two components?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Can the authors comment on the importance of the Bayesian aggregation component?
* It would be good to consolidate a single table (for one dataset) to directly show the effectiveness of each component in terms of accuracy, uncertainty, and robustness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The work does not have an apparent negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments. We would like to address the following statements.
> “While the paper presents three innovations and claims that they are all tailored to multi-modal data, only the Bayesian aggregation is inherently related to multi-modal inputs. The dynamic context update and the RBF context mechanism do not utilize the multi-modal characteristic. This makes the contributions and claims less coherent.”
Please note that our method is a joint neural process framework for uncertainty estimation of multimodal data. Although the framework consists of several components, all of them work coherently in an integrated way. Our proposed dynamic context update and RBF attention enable neural processes to capture uncertainties more accurately, while the Bayesian aggregation enables neural processes to deal with multimodal data. Our ablation study shows that the proposed framework cannot work as effectively or efficiently when any of the modules is removed. From the perspective of multimodal uncertainty estimation via neural processes, we believe the innovations are coherent.
> “It seems the Bayesian aggregation mechanism is not ablated. It's not clear how much improvement it brings to the overall performance as it is the main mechanism responsible for tackling multi-modal inputs. Does the improved robustness come from the aggregation algorithm or the other two components?” “Can the authors comment on the importance of the Bayesian aggregation component?”
We thank the reviewer for the constructive suggestion. We conducted additional ablation studies on MBA with two baselines, namely “Concat” and “Mean”. “Concat” bypasses MBA and directly provides $r^m_*$ of multiple modalities to the decoder (see Figure 1) by simple concatenation followed by an MLP, which lets $p(f(T^M_X)|C^M,T^M_X)$ in Equation (12) be parameterised by a decoder where $\\{C^M,T^M_X\\}=MLP(Concat(\\{r^m_*\\}^M\_{m=1}))$. $Concat(\\cdot)$ represents concatenating multiple vectors along their feature dimension. Similarly, “Mean” also bypasses MBA and simply averages the multiple modalities into a single representation. Formally, $p(f(T^M_X)|C^M,T^M_X)$ is parameterised by a decoder where $\\{C^M,T^M_X\\}=\\frac{1}{M} \\sum^M\_{m=1}r^m_*$. We conducted the same main experiments, and the results are shown below:
Test accuracy ($\\uparrow$).
\\begin{array}{ccccccc} \\hline {} & \\text{Handwritten} & \\text{CUB} & \\text{PIE} & \\text{Caltech101} & \\text{Scene15} & \\text{HMDB} \\\ \\hline \\text{Concat} & 99.35\\pm0.22 & 89.00\\pm1.24 & 89.71\\pm2.49 & 92.63\\pm0.18 & 77.18\\pm0.64 & 56.06\\pm2.13 \\\ \\text{Mean} & 99.45\\pm0.11 & 92.50\\pm2.43 & 90.88\\pm2.24 & 93.14\\pm0.25 & 77.60\\pm0.56 & 57.80\\pm1.97 \\\ \\text{MBA} & \\mathbf{99.50\\pm0.00} & \\mathbf{93.50\\pm1.71} & \\mathbf{95.00\\pm0.62} & \\mathbf{93.46\\pm0.32} & \\mathbf{77.90\\pm0.71} & \\mathbf{71.97\\pm0.43} \\\ \\hline \\end{array}
Test ECE ($\\downarrow$).
\\begin{array}{ccccccc} \\hline {} & \\text{Handwritten} & \\text{CUB} & \\text{PIE} & \\text{Caltech101} & \\text{Scene15} & \\text{HMDB} \\\ \\hline \\text{Concat} & 0.007±0.001 & 0.109±0.008 & 0.092±0.020 & 0.038±0.005 & 0.061±0.005 & 0.060±0.017 \\\ \\text{Mean} & 0.006±0.001 & 0.057±0.012 & 0.059±0.008 & 0.030±0.004 & 0.038±0.005 & 0.117±0.014 \\\ \\text{MBA} & \\mathbf{0.005±0.001} & \\mathbf{0.049±0.008} & \\mathbf{0.040±0.005} & \\mathbf{0.017±0.003} & \\mathbf{0.038±0.009} & \\mathbf{0.028±0.006} \\\ \\hline \\end{array}
Average test accuracy across 10 noise levels ($\\uparrow$).
\\begin{array}{ccccccc} \\hline {} & \\text{Handwritten} & \\text{CUB} & \\text{PIE} & \\text{Caltech101} & \\text{Scene15} & \\text{HMDB} \\\ \\hline \\text{Concat} & 97.71±0.46 & 85.51±1.42 & 85.94±2.48 & 89.84±0.17 & 72.23±0.52 & 45.22±2.86 \\\ \\text{Mean} & 98.42±0.09 & 88.27±1.83 & 88.74±2.33 & 92.07±0.16 & 74.06±0.28 & 49.58±2.24 \\\ \\text{MBA} & \\mathbf{98.58±0.10} & \\mathbf{88.96±1.98} & \\mathbf{93.80±0.49} & \\mathbf{92.83±0.18} & \\mathbf{74.14±0.35} & \\mathbf{64.11±0.15} \\\ \\hline \\end{array}
Test accuracy ($\\uparrow$), ECE ($\\downarrow$), and OOD detection AUC ($\\uparrow$).
\\begin{array}{ccccc} \\hline & \\text{Test accuracy} \\uparrow & \\text{ECE} \\downarrow & \\text{OOD AUC (SVHN)} \\uparrow & \\text{OOD AUC (CIFAR100)} \\uparrow \\\ \\hline \\text{Concat} & 74.24±0.27 & 0.125±0.005 & 0.781±0.016 & 0.728±0.004 \\\ \\text{Mean} & 74.72±0.24 & 0.109±0.003 & 0.803±0.007 & 0.742±0.003 \\\ \\text{MBA} & \\mathbf{74.92±0.07} & \\mathbf{0.011±0.001} & \\mathbf{0.872±0.002} & \\mathbf{0.786±0.005} \\\ \\hline\\end{array}
In every case, MBA outperformed both baselines. While similar performance was observed for Handwritten, Scene15, and Caltech101, large differences were observed in CUB, PIE, and HMDB across different metrics. The test accuracy on CIFAR10 is nearly identical across all methods, but large gaps in ECE and OOD performance were observed. This highlights the importance of MBA, especially for robustness and calibration performance.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks to the authors for adding the ablation study, this addressed my second question. But still, I think that the dynamic update and RBF context mechanism are not inherent progress towards multi-modality even though they do contribute to more accurate uncertainty estimation. Therefore, I keep my original score. | Summary: This work proposes a confidence calibration algorithm for multimodal classification problems. The algorithm includes three key components: 1) dynamic context memory, 2) multimodal Bayesian aggregation, and 3) adaptive radial basis function. The algorithm is evaluated on multiple benchmarks using classification accuracy, calibration, and OOD performance. On multiple benchmarks, the proposed algorithm is able to obtain SOTA results.
Strengths: 1, The targeted problem and challenges are explicitly stated. For each challenge, a technically reasonable solution is proposed.
2, The proposed algorithm achieves strong empirical results, obtaining SOTA results on multiple benchmarks and different metrics.
3, Comprehensive ablation studies have been conducted to demonstrate each component.
4, As illustrated in Figure 2, the proposed adaptive RBF algorithm outperformed other attention mechanisms.
Weaknesses: 1, Some notation definitions are confusing. Please refer to section "Questions" for details.
2, Because of the above notation issues, Section 3.2 is a bit hard to follow.
3, Ablation study is conducted for individual components. It would be interesting to design ablation studies and see how each component affects the end-to-end performance.
4, The component directly related to multimodality learning, i.e., multimodal Bayesian aggregation, is based on previous work [1]. It would be good if the authors can explain how the proposed MBA component differs from previous work, either in the revision or rebuttal.
[1] M. Volpp, F. Flürenbrock, L. Grossberger, C. Daniel, and G. Neumann. Bayesian context aggregation for neural processes. In International Conference on Learning Representations, 2021.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1, **Line 71**, from my understanding, the context set C and target set T contains both training and test datasets. But $N_C +N_T=N_{\text{train}}$. Why is the summation of $N_C$ and $N_T$ only equal to the number of training points?
2, **Line 148 and 150**, what are $r^m$ and $s^m$? How are they connected/different from $r^m_{\*}$ and $s^m_{\*}$?
3, **Line 160**, how are encoders parameterized by $\theta$ and $\omega$ connected/different from those by $\phi$ and $\psi$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors explicitly discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and questions. We would like to address the following questions.
> "Line 71, from my understanding, the context set C and target set T contains both training and test datasets. But $N_C +N_T=N_{\\text{train}}$. Why is the summation of $N_C$ and $N_T$ only equal to the number of training points?"
We would like to clarify that a context set and a target set are required separately for the training and test datasets. This means that during training, the training dataset can be split into a context set and a target set (i.e., $N_C +N_T=N_{\\text{train}}$). Similarly, during testing, a context set and a target set are required. However, since we have no label information for the test dataset, the context set has to be given from somewhere else. A simple solution would be using the entire training set as the labelled context set, but this is highly inefficient, especially for large datasets. Therefore, we have proposed the dynamic context memory to store a smaller number of informative training samples during training that can be used at the inference stage.
> “Line 148 and 150, what are $r^m$ and $s^m$? How are they connected/different from $r^m_*$ and $s^m_*$?”
$r^m$ and $s^m$ are the encoded context representations, which carry no information about target samples. The attention mechanism measures the distance between the target samples and the context samples, and produces the weighted sum of $r^m$ and $s^m$ for specific target samples. These target-specific context representations are $r^m_*$ and $s^m_*$.
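This weighted-sum step can be sketched as follows (NumPy; names are illustrative — in the paper the weights come from the adaptive RBF attention, not shown here):

```python
import numpy as np

def target_specific_context(weights, r):
    # weights: (n_targets, n_context) attention of each target over context
    # r:       (n_context, d) encoded context representations r^m
    # returns  (n_targets, d) target-specific representations r^m_*
    return weights @ r
```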
> “Line 160, how are encoders parameterized by $\\theta$ and $\\omega$ connected/different from those by $\\phi$ and $\\psi$?”
The encoders parameterised by $\\theta$ and $\\omega$ and those by $\\phi$ and $\\psi$ are used to approximate different distribution parameters. To be precise, $u^m$ encoded with $\\theta$ represents the mean of prior, $q^m$ encoded by $\\omega$ represents the diagonal covariance of prior, $r^m_*$ encoded by $\\phi$ represents the samples from $p({r^m_*}\_i \vert z_i)$, and $s^m_*$ encoded by $\\psi$ represents the diagonal covariance of $p({r^m_*}\_i \vert z_i)$. Each of the encoders is implemented as an independent neural network.
> “Ablation study is conducted for individual components. It would be interesting to design ablation studies and see how each component affects the end-to-end performance.”
Due to the page limitation, the ablation studies were put in the appendix. We would like to highlight that we conducted all ablation studies which are outlined in Appendix C using the same experimental procedures including the datasets as our main experiments in order to show the effectiveness of each component.
> “The component directly related to multimodality learning, i.e., multimodal Bayesian aggregation, is based on previous work [1]. It would be good if the authors can explain how the proposed MBA component is different from previous work, either in the revision or rebuttal.”
Our proposed MBA has technical connections with the previous work [1]. Both methods are Bayesian approaches that aggregate multiple representations into a single one (latent variable $z$) by modelling the prior and likelihood of $z$ to obtain the posterior. However, there are significant differences between the previous work [1] and ours, regarding their intended applications and model specifications.
The previous study [1] aimed to aggregate representations of different instances (or observations) into a single representation of latent variable $z$ for the context samples, using an uninformative prior on $z$. Since its primary purpose is to summarise context samples, it has no interactions with target samples.
In contrast, our approach aggregates multiple modalities of the same instance (or an observation) into a single representation of latent variable $z$. We incorporate an informative prior $\\mathcal{N}(u^m,\\text{diag}(q^m))$ (Equation (6), L159, and Figure 1), derived from the context memory. Furthermore, by considering a target-specific representation ${r^m_*}\_i $ (the output of the proposed adaptive RBF attention) as a sample of $p({r^m_*}\_i \vert z_i)$, we model $p({r^m_*}\_i \vert z_i)$ as a distribution after observing a target sample, and $p(z_i)$ as a distribution prior to observing a target sample.
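The Gaussian conditioning behind this style of Bayesian aggregation can be sketched as a standard precision-weighted update under a factorised Gaussian model (NumPy; function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def gaussian_aggregate(u, q, r_list, s_list):
    # u, q:           prior mean and diagonal variance of z (shape (d,))
    # r_list, s_list: per-modality observation means and diagonal variances
    # Posterior precision is the sum of prior and observation precisions;
    # the posterior mean is the precision-weighted average.
    prec = 1.0 / q + sum(1.0 / s for s in s_list)
    var = 1.0 / prec
    mean = var * (u / q + sum(r / s for r, s in zip(r_list, s_list)))
    return mean, var
```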
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for response. They have addressed all my questions and concerns. After considering the responses and the good empirical results, I decided to raise my score. | Summary: This paper proposes a multimodal neural processes (neural network generaliation of Gaussian processes) model.
The overall approach has several novel elements:
* A way to maintain a dynamic context set throughout training (e.g. a support set for few-shot learning; these context sets are needed for neural processes)
* A Bayesian aggregation scheme for combining multiple modalities
* An adaptive RBF attention mechanism as an alternative to (vanilla) dot product attention which the authors argue is overconfident on OOD samples.
Experiments on a suite of datasets show that the proposed method is faster, more accurate and better calibrated, and better at detecting OOD samples than prior work.
Strengths: This reads like a polished piece of work with clear writing, extensive ablations (in the supplementary) and novel technical contributions. Experimental results seem fairly convincing (that the proposed method is indeed better than prior similar works). My caveat is that my familiarity with this part of the literature on Neural Processes is passing at best.
Weaknesses:
I have no major complaints about this paper, but some things to point out are:
It is pretty unclear what the actual multiple modalities in the experimental datasets are (which perhaps makes these experiments not particularly compelling unless you are already familiar with this line of work). I had to trace through a series of cited works to figure out that these different modalities are (likely) different features extracted, e.g. using different networks.
Section 3.2 could be written more clearly — I couldn’t figure out why there were multiple encoders until encountering Lemma 3.1 — it’d be much better to explain this up front.
A nit: it’s also not clear when reading section 3.1 how often the dynamic context memory is meant to be updated until I read the pseudocode much later (I’d recommend saying that it’s a per-minibatch update somewhere).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback. We would like to address the reviewer’s suggestions.
> “It is pretty unclear what are the actual multiple modalities in the datasets that are used for experiments”
We acknowledge that we omitted the details of the input modalities in Appendix B.1. We will add the types of input modalities and the preprocessing steps for each dataset in our revised manuscript.
> “Section 3.2 could be written more clearly — I couldn’t figure out why there were multiple encoders until encountering Lemma 3.1 — it’d be much better to explain this up front. A nit: it’s also not clear when reading section 3.1 how often the dynamic context memory is meant to be updated until I read the pseudocode much later (I’d recommend saying that it’s a per-minibatch update somewhere).”
Thank you for the suggestion. We agree with the reviewer. We will elaborate why multiple encoders are necessary in the beginning of Section 3.2, and explain the per-minibatch update in the beginning of Section 3.1. | Summary: This paper proposes a new method for multimodal uncertainty estimation by extending neural processes. The authors summarize three challenges to do that and give solutions correspondingly. Experimental results show that the proposed method is more robust and outperforms existing baselines.
---
Thanks for the clarification! My main concerns are addressed.
Strengths: The proposed method (MNPs) can achieve good performance empirically.
Weaknesses: 1. The motivation is not clear. Indeed, there is little discussion of neural processes with multimodal data, and extending unimodal neural processes to multimodal scenarios could be challenging. However, existing works have discussed multi-view data [16,21]. The reason for extending neural processes to multimodal data (instead of extending/improving [16,21]) remains unclear.
2. Lack of understanding. I would like to suggest that the authors add some ablation studies. From current experiments, we can only see that proposed MNPs outperform baselines. However, why it can do that is not clear, e.g., why the unimodal method (DE) can achieve the best test accuracy.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: (Line 185): Maybe I missed something, but why does using an RBF address the overconfidence issue?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments. We would like to address the following points.
> “However, existing works discussed the multi-view data [16,21]. The reason why extending the neural processes for multimodal data (instead of extending/improving [16,21]) remains unclear.”
As we stated in our motivation in L38-46, NPs bring the best of both worlds: GPs, which have been shown to be well-calibrated and robust to domain shift [21, 31, R1], and DNNs, which have representational power and efficiency. The current SOTA model [21] is a GP, which is robust but at the same time computationally expensive. We show in our experiments that we can achieve better or comparable performance with faster computation (up to fivefold). Also, in [21, R2] and our experiments, TMC [16] has been shown to have limited capability in calibration performance and OOD detection, which are essential uncertainty estimation downstream tasks [21, 31]. Therefore, existing methods are either less effective or less efficient, which motivates the development of our approach.
> “I would like to suggest that the authors add some ablation studies. From current experiments, we can only see that proposed MNPs outperform baselines. However, why it can do that is not clear, e.g., why the unimodal method (DE) can achieve the best test accuracy.”
Please refer to Appendix C for ablation studies comparing different context memory updating mechanisms, attention types, and adaptive learning of RBF attention. We would like to highlight that DE has achieved the best test accuracy for only one out of the seven datasets. In the other six datasets, our method outperformed DE. For uncertainty estimation, metrics like ECE, robustness to noise, and OOD detection performance are more important as they directly or indirectly quantify uncertainty estimation performance. For those metrics, our method outperformed DE in all seven datasets, which demonstrates the superior uncertainty estimation performance of our method.
> “Why does using an RBF address the overconfidence issue?”
We explained the reason in L186-191 of the paper, but please let us elaborate more here. The core component of the RBF is the lengthscale, which controls the degree of smoothness in distance calculations. To illustrate, consider two points $x$ and $x'$ where ${\\vert\\vert x-x' \\vert\\vert}^2=1$: the RBF value (indicative of closeness) can vary significantly with the lengthscale, being 0.98 if the lengthscale is 5 and 1.93e-22 if the lengthscale is 0.1. This underlines that an appropriately determined lengthscale can distinguish whether or not two samples come from different distributions. As highlighted in L203, this lengthscale parameter is often predetermined as a hyperparameter, or treated as an optimisable parameter that requires a complex initialisation. To overcome this limitation, we have proposed $\\mathcal{L}\_{RBF}$ to optimise the lengthscale without any complex initialisation. The effectiveness of this approach is demonstrated in Figure 2, Appendix C.2, and Appendix C.3.
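The quoted numbers are consistent with the standard RBF form $\exp(-\Vert x-x'\Vert^2 / 2\ell^2)$; a quick sketch to reproduce them (illustrative only):

```python
import numpy as np

def rbf(sq_dist, lengthscale):
    # Standard RBF kernel value for a squared distance ||x - x'||^2.
    return np.exp(-sq_dist / (2.0 * lengthscale ** 2))

# With ||x - x'||^2 = 1:
#   lengthscale 5   -> exp(-1/50)  ~ 0.98
#   lengthscale 0.1 -> exp(-50)    ~ 1.93e-22
```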
Additional references:
[R1] S. G. Popescu, D. J. Sharp, J. H. Cole, K. Kamnitsas, and B. Glocker. Distributional gaussian process layers for outlier detection in image segmentation. In A. Feragen, S. Sommer, J. Schnabel, and M. Nielsen, editors, Information Processing in Medical Imaging, pages 415–427, Cham, 2021. Springer International Publishing. ISBN 978-3-030-78191-0.
[R2] K. Zou, T. Lin, X. Yuan, H. Chen, X. Shen, M. Wang, and H. Fu. Reliable multimodality eye disease screening via mixture of student’s t distributions. arXiv preprint arXiv:2303.09790, 2023.
[R3] Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. Dillon, B. Lakshminarayanan, and J. Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[R4] J. Mukhoti, A. Kirsch, J. van Amersfoort, P. H. Torr, and Y. Gal. Deterministic neural networks with appropriate inductive biases capture epistemic and aleatoric uncertainty. arXiv preprint arXiv:2102.11582, 2021. | Rebuttal 1:
Rebuttal: **Continued Rebuttal for Reviewer hoCV**
> “It would be good to consolidate a single table (for one dataset) to directly show the effectiveness of each component in terms of accuracy, uncertainty, and robustness.”
Thank you for the suggestion. We would like to provide the first and the last datasets which are Handwritten and CIFAR10 for the comparison.
Handwritten.
\\begin{array}{c|ccc|ccc|ccc} \\hline & {} & \\text{Context memory} & {} & {} & \\text{Attention} & {} & {} & \\text{Multimodal aggregation} & {} \\\ {} & \\text{Random} & \\text{FIFO} & \\text{MSE (Ours)} & \\text{Dot Softmax} & \\text{RBF Softmax} & \\text{RBF Sparsemax (Ours)} & \\text{Concat} & \\text{Mean} & \\text{MBA (Ours)} \\\ \\hline \\text{Test accuracy} \\uparrow & 99.40±0.14 & 99.30±0.11 & \\mathbf{99.50±0.00} & 99.00±0.18 & 98.80±0.45 & \\mathbf{99.50±0.00} & 99.35±0.22 & 99.45±0.11 & \\mathbf{99.50±0.00} \\\ \\text{ECE} \\downarrow & 0.007±0.001 & 0.007±0.001 & \\mathbf{0.005±0.001} & 0.008±0.003 & 0.019±0.005 & \\mathbf{0.005±0.001} & 0.007±0.001 & 0.006±0.001 & \\mathbf{0.005±0.001} \\\ \\text{Accuracy with noisy inputs} \\uparrow & 98.39±0.21 & 98.51±0.11 & \\mathbf{98.58±0.10} & 77.99±0.32 & 94.56±0.66 & \\mathbf{98.58±0.10} & 97.71±0.46 & 98.42±0.09 & \\mathbf{98.58±0.10} \\\ \\hline\\end{array}
CIFAR10.
\\begin{array}{c|ccc|ccc|ccc} \\hline & {} & \\text{Context memory} & {} & {} & \\text{Attention} & {} & {} & \\text{Multimodal aggregation} & {} \\\ {} & \\text{Random} & \\text{FIFO} & \\text{MSE (Ours)} & \\text{Dot Softmax} & \\text{RBF Softmax} & \\text{RBF Sparsemax (Ours)} & \\text{Concat} & \\text{Mean} & \\text{MBA (Ours)} \\\ \\hline \\text{Test accuracy} \\uparrow & 74.61±0.22 & 74.82±0.11 & \\mathbf{74.92±0.07} & 68.81±0.62 & 67.65±0.16 & \\mathbf{74.92±0.07} & 74.24±0.27 & 74.72±0.24 & \\mathbf{74.92±0.07} \\\ \\text{ECE} \\downarrow & 0.073±0.005 & 0.073±0.006 & \\mathbf{0.011±0.001} & 0.130±0.019 & 0.080±0.001 & \\mathbf{0.011±0.001} & 0.125±0.005 & 0.109±0.003 & \\mathbf{0.011±0.001} \\\ \\text{OOD SVHN} \\uparrow & 0.860±0.003 & 0.862±0.007 & \\mathbf{0.872±0.002} & 0.849±0.009 & 0.864±0.006 & \\mathbf{0.872±0.002} & 0.781±0.016 & 0.803±0.007 & \\mathbf{0.872±0.002} \\\ \\text{OOD CIFAR100} \\uparrow & 0.777±0.002 & 0.778±0.005 & \\mathbf{0.786±0.005} & 0.775±0.005 & 0.771±0.006 & \\mathbf{0.786±0.005} & 0.728±0.004 & 0.742±0.003 & \\mathbf{0.786±0.005} \\\ \\hline\\end{array}
By examining the two tables, it becomes clear that every component has an impact on all metrics. Nevertheless, noticeable differences, particularly in test accuracy, ECE, and robustness to noise, can be observed across different attention types. Notably, when it comes to OOD performance, the multimodal aggregation methods show more substantial differences. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper extends one of the promising uncertainty estimation method: Neural Process from unimodal to multimodal. This is motivated by the fact that: current techniques are predominantly designed for unimodal data, and directly applying them to multimodal information is ineffective. However, this extension poses several challenges: 1) how fuse information from different modalities effectively and efficiently; 2) the context memory of the original design grows in proportion to M - the number of modality, which is memory-consuming, and sets the question of how to maintain it small yet informative; 3) how to find an appropriate length-scale of the Radial Basis Function (RBF) that ensures a tight boundary between in-distribution and out-of-distribution data.
To tackle these challenges, the paper introduces three solutions: 1) Dynamic Context Memory to choose the most informative samples in the context memory, 2) Multimodal Bayesian Aggregation, and 3) Adaptive RBF Attention. Empirical experiments show that the method can outperform other unimodal and multimodal baselines in terms of maintaining good accuracy and calibration in the presence of noise and in terms of out-of-distribution detection.
Strengths: 1. originality: the first to extend Neural Process from unimodal to multimodal and effectively combine existing techniques to solve some of the challenges incurred.
2. clarity: in general, the paper articulates its ideas clearly, even though some parts remain difficult to understand or unclear in terms of details (see the weakness part)
3. significance: the proposed method shows promising results in terms of outperform most unimodal and multimodal baselines.
Weaknesses: **Clarity**: the clarity of the technical introduction (Sec. 3) and the experiment (Sec. 4) parts could be substantially improved.
1. The method introduction part could be **challenging to comprehend for readers unfamiliar with the Neural Process and its technical details**. It lacks adequate contextual knowledge, such as an explanation for why the context memory is required and why it is used to store training samples rather than a context feature. I think Section 2 should be improved to be more comprehensive and general in introducing the Neural Process, so that readers do not have to revisit several referenced papers repeatedly.
1. The **math details can be overwhelming and at times confusing**. For instance, reviewing Sec. 3.3 necessitates frequent referral back to Sec. 3.2. It would be helpful if the semantic meaning of symbols were reintroduced when they appear, preventing the need for readers to revisit previous sections for notation understanding.
1. **Derivation details**: The logic in some derivations is not easy to understand, e.g. Line 221 introduces the function $f(T_X^M)$ without any prior explanation. Also, there is no explanation on how equation (12) is derived, and why softmax is applied on a density function. These issues impede reading and understanding. The formula (4) and formula (6) are not consistent as their parameter notation is not the same on $\omega$ and $\psi$.
2. The MNPs **pseudocode** lacks a high-level introduction and has heavy reference links, failing to provide clarifying information.
3. **Captions**: Some table captions lack sufficient information. For instance, Table 4/5 does not clearly describe the dataset and settings used.
4. **Experimental Setup**: The presentation of the experimental settings, such as how datasets e.g. Caltech101 were modified into multi-view datasets, is not clear.
**Soundness**: Despite the multimodal setting, the experiments use only multi-view image data. The lack of experiments with multimodal combinations such as image + text impedes the proof of the model's effectiveness in various settings.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: see questions listed on the weakness part
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The author have listed one limitation that the updating mechanism is not theoretically guaranteed to obtain the optimal context memory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and comments. We would like to address the following statements.
> "It lacks adequate contextual knowledge, such as an explanation for why the context memory is required, why it is used to store training samples rather than using a context feature."
We will provide more explanations in the revision as suggested. Here we would like to clarify why context memory is required. NPs require a labelled context set during both training and inference in order to make predictions for a target set. During training, training samples, which are labelled, can be used as the context set. However, during the inference stage, the labelled context set is not given for our task. Thus, a context memory which stores labelled training samples is crucial to enable an efficient and effective inference.
The context memory $\\{ C_X,C_Y \\}$ stores training samples in input space so that when a target sample $T_X$ is given, the distance between $C_X$ and $T_X$ is measured to weight the context elements for generating target-specific context representations in a non-parametric way (i.e., weighted sum of encoded context samples for a specific target sample). It is worth noting that, as we stated in L65, we consider the input space to be a feature space.
> “I think the Sec. 2 should be improved to be more comprehensive and general in introducing the Neural Process, to avoid the readers to revisit several referenced papers repeatedly…It would be helpful if the semantic meaning of symbols is reintroduced when they appear, preventing the need for readers to revisit previous sections for notation understanding.”
We appreciate your suggestion. We will include a general introduction of NPs at Section 2 in the revised manuscript.
> “The logic in some derivations is not easy to understand, e.g. Line 221 introduces the function $f(T^M_X)$ without any prior explanation. Also, there is no explanation on how equation (12) is derived, and why softmax is applied on a density function.”
Thank you for pointing this out. Please let us provide a detailed explanation here. In Gaussian process classification, such as [60, 21, 38, 18], where the likelihood is categorical, the class probability is obtained in two stages. First, the predictive distribution is acquired in the form of a Gaussian distribution, whose samples are then squashed through the softmax function. Similarly, in NPs, the predictive distribution is acquired as a Gaussian distribution $p(T_Y\\vert T_X,C_X,C_Y)$, which is suitable for regression but not for classification. Thus, we introduced a latent function $f(\\cdot)$ to estimate the latent distribution $p(f(T^M_X)|C^M,T^M_X)$, whose samples are then squashed through the softmax function to estimate the class probability.
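This two-stage sample-then-softmax procedure can be sketched as follows (NumPy; names are illustrative — in the actual model the decoder parameterises the Gaussian over latent function values):

```python
import numpy as np

def class_probs(mean, var, n_samples=100, seed=0):
    # mean, var: (n_classes,) Gaussian predictive distribution over the
    # latent function values; sample, apply softmax, then average.
    rng = np.random.default_rng(seed)
    f = rng.normal(mean, np.sqrt(var), size=(n_samples, len(mean)))
    e = np.exp(f - f.max(axis=-1, keepdims=True))   # stable softmax
    return (e / e.sum(axis=-1, keepdims=True)).mean(axis=0)
```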
> “The formula (4) and formula (6) are not consistent as their parameter notation is not the same on $\\omega$ and $\\psi$.”
We would like to clarify that the parameters $\\phi$, $\\psi$, $\\theta$, and $\\omega$ are meant to be different, indicating four distinct encoders. The reason for using different encoders is that $r^m_*$ for $\\phi$, $s^m_*$ for $\\psi$, $u^m$ for $\\theta$, and $q^m$ for $\\omega$ are used to approximate different distribution parameters: $r^m_*$ as samples from $p({r^m_*}\_i \vert z_i)$, $s^m_*$ as the diagonal covariance of $p({r^m_*}\_i \vert z_i)$, $u^m$ as the mean of the prior, and $q^m$ as the diagonal covariance of the prior.
> “The MNPs pseudocode lacks a high-level introduction and has heavy reference links, failing to provide clarifying information.”
We agree with the reviewer. We will add short comments for each line to make it more interpretable.
> “Captions: Some table captions lack sufficient information. For instance, Table 4/5 does not clearly describe the dataset and settings used.”
We kept captions as concise as possible due to the space constraints. Please refer to Section 5.1, 5.2, and Appendix B for experimental details.
> “Experimental Setup: The presentation of the experimental settings, such as how datasets e.g. Caltech101 were modified into multi-view datasets, is not clear.” “Despite the multimodal setting, the experiments use only multi-view image data. The lack of experiments with multimodal combinations such as image + text impedes the proof of the model's effectiveness in various settings.”
Thank you for bringing this to our attention. In Appendix B.1, we provided an outline of the dataset details but missed information about the types of input modalities and the preprocessing. We will add these to our revised manuscript.
In summary, the datasets consist of a range of input modalities. Handwritten, PIE, HMDB, Scene15, Caltech101, CIFAR10, SVHN, and CIFAR100 are image datasets with diverse features associated with each image. This diversity comes from either different feature extraction methods or augmentation to generate multiple modalities from a single image. Additionally, CUB represents an image+text dataset.
It's important to clarify that we consider multimodality to involve multiple inputs regardless of whether these inputs share the same type (e.g., image+image) or differ in their types (e.g., image+text).
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses to my and other reviewers' questions.
The major concern I have now is about the experiment part. In my perspective, the most important application of multimodal uncertainty estimation lies in **integrating information from different sources (e.g. texts+images, images from different cameras)** to gauge uncertainty. For instance, in the scenario where image data might be highly blurry and the prediction based on the image can be uncertain, but the text data has predictive information, by leveraging both, one can achieve a more accurate measure of uncertainty.
Using feature extraction methods and augmentations to procure multiple inputs fails to capture real-world cases, given that they are produced from the same source of information, especially if these feature extraction methods only differ in their random seeds. If there is much redundant information in every modality, it is challenging to assert that improvements in the uncertainty estimator would generalize effectively across diverse data sources.
Based on this concern, I have decided to maintain my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
We appreciate your feedback and agree with your comment: "For instance, in the scenario where image data might be highly blurry and the prediction based on the image can be uncertain, but the text data has predictive information, by leveraging both, one can achieve a more accurate measure of uncertainty." This is exactly what our method aims to accomplish. We also agree with you on the redundancy of information across multiple modalities that stem from the same input source.
Please note that one of our datasets, CUB, contains both textual descriptions and images. Its experimental results on robustness to noisy samples in section 5.1 demonstrate the desirable property which was mentioned above.
Nevertheless, to make our evaluation even more comprehensive, we are currently conducting additional experiments with a dataset consisting of multiple modalities from different input sources. We will update our response as soon as the experiments are finished.
Kind regards,
The authors | null | null | null | null | null | null |
StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners | Accept (poster) | Summary: The paper investigates the potential of using synthetic images generated by text-to-image models to train self-supervised image embedding models. Methodologically, the authors propose StableRep, a multi-positive contrastive learning method that treats multiple images generated from the same text prompt as positive examples for each other. The paper demonstrates two key findings. First, when the generative model is appropriately configured, self-supervised methods trained on synthetic images can achieve comparable or superior performance to real image counterparts. Second, when language supervision is incorporated, synthetic data become more efficient.
Strengths: 1. **Interesting topic:** The paper investigates the potential of using synthetic images generated by text-to-image models as strong visual representation learners. This exploration addresses an important and timely question.
2. **New contrastive learning method:** The proposed StableRep is new and tailored for synthetic data.
3. **Comprehensive evaluation:** The paper evaluates the representations learned by StableRep on large-scale datasets and compares them with strong baselines.
Weaknesses: 1. **Potentially entangled comparison in terms of the amount of data:** The only concern I have for this paper is the claim of data amount. When claiming "synthetic and real images of the same amount", the authors refer to the amount of data for representation learning. However, Stable Diffusion is trained on LAION-2B. Therefore, 50M synthetic images may contain information from 2B images, making this comparison less meaningful. I'd like to see more discussion on this issue.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Limitations are included in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for considering our work as interesting and novel! The concern you raised is a great question, and we will try to answer it from two aspects:
1. While the dataset distillation hypothesis is unavoidable, we have a different interpretation from the "distribution" perspective. Suppose there is an underlying distribution of all natural images, e.g., every single scene on this planet projected into an image via a pinhole camera. We call this underlying distribution $D$. LAION-2B is just a set of 2B images sampled from $D$; we used LAION-2B to estimate this underlying distribution, which yielded a Stable Diffusion model that can be viewed as a "parameterized distribution" $D_{sd}$ approximating $D$. In theory, $D$ is still much more powerful than $D_{sd}$. If we are about to sample some random training images for representation learning from either $D$ or $D_{sd}$, which one should we pick? One may choose $D$ because it is theoretically more powerful, but we argue $D_{sd}$ may be a good (or even better) option. Why? Because $D_{sd}$ allows us more fine-grained control over how we sample those images: (1) the guidance scale allows us to trade off between image quality and diversity, so we can tune it, but for $D$ it is very expensive or even infeasible to tune -- once the data is collected, it is collected; (2) the stochasticity of diffusion models allows us to sample multiple images that correspond to the same underlying semantics described by a caption, but doing so with $D$ is almost infeasible; (3) and potentially others. So in summary, $D_{sd}$ is a weaker distribution, and randomly sampling from it may actually be disadvantageous, rather than advantageous as implied by the distillation hypothesis. But through the control of (1) and (2), we can sample from $D_{sd}$ in a more efficient way than $D$ allows.
2. Practically, we conduct a surrogate sanity check to get a rough sense of how much information the synthesized training data contains about the downstream task. Specifically, we compute how close the downstream datasets are to StableRep's synthetic training set. Concretely, for an image $I_i$ in a downstream dataset, we find its closest image in StableRep's training set $D$ by cosine similarity under a pre-trained SimCLR feature extractor $f$ (recall that SimCLR is optimized to maximize the cosine similarity between views of the same image). We define this highest similarity (denoted $s$) as the image's similarity with StableRep's training set: $s_i = \max_{X \in D} \cos(f(I_i), f(X))$. We evaluate four datasets and report the statistics (avg, min, and max) of this highest-similarity metric over each dataset's images.
| Train Set | stats | aircraft|cars|flowers|food101|
|--|:--:|:--:|:--:|:--:|:--:|
| StableRep Train | avg | 0.891 | 0.836 | 0.836 | 0.791 |
| | min | 0.652 | 0.258 | 0.523 | 0.443 |
| | max | 0.985 | 0.980 | 0.967 | 0.958 |
| ImageNet Train | avg | 0.901 | 0.870 | 0.823 | 0.807 |
| | min | 0.700 | 0.629 | 0.586 | 0.560 |
| | max | 0.980 | 0.998 | 0.988 | 0.977 |
As a comparison, we also include the distance to ImageNet training set as a baseline. We observe that:
- generally, downstream datasets are closer (by the avg metric) to the ImageNet training set than to our StableRep train set, except for flowers.
- no value surpasses 0.99 for the StableRep train set, while there is a 0.998 for the ImageNet train set.
From this perspective, we may conjecture that the ImageNet train set contains more information about the downstream datasets than the StableRep dataset. However, StableRep still generalizes better than methods trained on the real ImageNet train set, as illustrated in question 1 of the "global rebuttal" shared with all reviewers.
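The highest-similarity metric $s_i = \max_{X \in D} \cos(f(I_i), f(X))$ can be computed straightforwardly; here is a minimal numpy sketch, assuming the SimCLR features have already been extracted into row-wise arrays:

```python
import numpy as np

def max_cosine_similarity(query_feats, train_feats):
    """For each downstream feature f(I_i), return s_i, the maximum over X in D
    of cos(f(I_i), f(X)), where the rows of train_feats are the features of D."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    return (q @ t.T).max(axis=1)  # highest cosine similarity per query image
```

The avg/min/max statistics in the table are then simply the mean, minimum, and maximum of the returned vector over each downstream dataset.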
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The first argument can be further strengthened by an experimental setting where collecting 50M high-quality images is almost infeasible for $D$ but easy for $D_{sd}$, but the proof-of-concept experiment in the paper is interesting enough to me. Overall, I think this paper is very interesting to the generative model and representation learning communities, and I'd like to raise my score to 8.
---
Reply to Comment 1.1.1:
Title: Thank you and some update
Comment: Dear Reviewer FSjQ,
Thank you for reading our arguments and proposing this new setting!
We have also obtained some new results after the rebuttal deadline, which may be interesting to you. So we want to share with you a summary of all additional results from this period, which can be found [here](https://openreview.net/forum?id=xpjsOQtKqx&noteId=F3K5WcycnW).
We would be very happy if they can address any of your remaining concerns, enhance your evaluation of our work, or simply make the work more interesting to you. If you have any unresolved or new concerns, feedback, or comments stimulated by these additional results, please do let us know.
Strengths: - The paper is well motivated. Given the recent progress of large-scale text-to-image generative models like Stable Diffusion, it is timely to investigate the effectiveness of these models in generating high-quality images to assist discriminative tasks.
- The paper is generally well-written and easy to follow.
- The experiments are extensive and the results seem promising.
Weaknesses: - The authors use large-scale image-text datasets like CC3M, CC12M, and RedCaps for their study. These datasets have well-curated image captions. Although the authors claim that the proposed method can reduce the reliance on collecting large-scale real images for learning representations, the cost of collecting these image captions is also somewhat expensive and should not be ignored. Thus, a more cost-effective and interesting setting would be to generate a synthetic ImageNet dataset using category labels as text prompts (e.g., a photo of [category]), since it does not require any human effort of collecting captions. Then self-supervised models could be pre-trained on the real ImageNet dataset as well as the synthetic ImageNet dataset for comparison. I am curious about whether the self-supervised methods trained on synthetic ImageNet still have advantages over training on real ImageNet.
- It seems that the synthetic data only have performance advantages for linear probing experiments. For few-shot experiments, self-supervised methods trained on synthetic data still have a large performance gap with training on real data, which limits the practical applications of using synthetic data to some extent.
- The authors only evaluate the proposed method on classification downstream tasks. What about dense prediction downstream tasks like object detection or semantic segmentation? Will the synthetic data still have the advantages on these kinds of downstream tasks?
- According to ablations in Sec. 4.2, generating multiple images per caption yields better performance. However, this may be due to simply increasing the number of views. To eliminate the interference of the number of views, the authors should also conduct an ablation by replacing multiple generated images per caption with the same number of cropped views per real image.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am concerned about the questions mentioned above. Given the current status of the paper, I am leaning towards borderline accept and hope the authors could address my concerns.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations and broader impacts in Sec. 7 (main text) and Sec. D (supplementary material) in detail, which look good to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing valuable feedback. Below we try to address your concerns:
> Thus, a more cost-effective and interesting setting should be to generate a synthetic ImageNet dataset using category labels as text prompts (e.g., a photo of [category])...I am curious about whether the self-supervised methods trained on synthetic ImageNet still have advantages over training on real ImageNet.
This is a good suggestion. We have conducted the experiments and include the results in question 1 of the "global rebuttal" shared to all reviewers (it is located around the top of this page).
In terms of Synthetic ImageNet v.s. Real ImageNet, we observe that:
- When conducting supervised cross-entropy training, real data significantly outperforms synthetic data on both downstream linear classification and few-shot evaluation.
- When conducting self-supervised learning, i.e., SimCLR, synthetic ImageNet is comparable to (or just slightly worse than) real ImageNet on downstream linear probing benchmarks. Generally, different prompting strategies result in different performance, which indicates the importance of prompt design.
We also observe that StableRep consistently outperforms both supervised training and SimCLR on synthetic ImageNet, demonstrating the effectiveness of our pipeline. While collecting image captions can be more expensive than using labels, we argue that large language models can be explored to help synthesize these captions, which may become increasingly accessible and cheap in the near future.
> It seems that the synthetic data only have performance advantages for linear probing experiments. For few-shot experiments, self-supervised methods trained on synthetic data still have a large performance gap with training on real data, which limits the practical applications of using synthetic data to some extent.
This is not completely true.
Firstly, our StableRep outperforms training with real data on 8 out of the 10 few-shot benchmarks, and the average accuracy (over all 10 datasets) only lags behind by 0.8%.
Secondly, the impression of underperformance (for synthetic training) mainly comes from CIFAR-10/100. We have identified this as an image resolution issue (please check our experiments in question 2 of the "global rebuttal"). Intuitively, the real training images contain low-resolution and blurry images, which help the model generalize to extremely low resolutions, such as 32x32 for CIFAR. In contrast, the synthetic data only contains high-resolution images. A potential fix could be randomly downsampling images when training models on synthetic data.
> The authors only evaluate the proposed method on classification downstream tasks. What about dense prediction downstream tasks like object detection or semantic segmentation?
This is a great suggestion, and we have evaluated the quality of representation on ADE20k semantic segmentation dataset. Specifically, we freeze the backbone, and train the decoder part of an UperNet [a]. We used the default parameters in the mmsegmentation library [b]. The results are as below:
| method | pre-train data | mean IoU (%) | pixel acc. (%)|
|--|:--:|:--:|:--:|
| clip | real, cc12m | 33.3 | 75.9 |
| simclr | real, cc12m | 35.2 | 77.6 |
| simclr | syn, cc12m | 33.3 | 76.0 |
| StableRep | syn, cc12m | **38.0** | **78.4** |
| StableRep | syn, RedCaps | 37.1 | 77.8 |
We observe that our StableRep consistently outperforms all other methods.
> According to ablations in Sec. 4.2, generating multiple images per caption yields better performance. However, this may be due to simply increasing the number of views. To eliminate the interference of the number of views, the authors should also conduct an ablation by replacing multiple generated images per caption with the same number of cropped views per real image.
This is a good suggestion for an ablation study. To keep the comparison with StableRep sensible, we keep the total number of crops per batch at 8192 and vary the number of crops per image. In other words, if we use k views per image for SimCLR, then we only sample 8192 / k images per batch. Note that this setup is different from the SwAV paper [c] (which originated the idea of > 2 views per image), where they actually increased the total number of crops per batch. The comparison between SimCLR and StableRep is:
| method | pre-train data | 2 view | 4 view | 6 view |
|--|:--:|:--:|:--:|:--:|
|SimCLR| Real | 59.5 | 60.0 | 59.3 |
|StableRep| Syn | **68.7** | **69.6** | **69.6** |
Another ablation is to compare Synthetic SimCLR and StableRep with 2 views. Both are using synthetic images and two views. The difference is that the two views of synthetic SimCLR come from the same image, while the two views of StableRep come from two different images. The comparison shown below clearly demonstrates that most of the improvement comes from using different images of the same caption, rather than just more views.
| method | pre-train data | 2 view |
|--|:--:|:--:|
|SimCLR | Syn | 60.5 |
|StableRep| Syn | **68.7** |
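The multi-positive objective underlying this comparison, where every candidate generated from the same caption counts as a positive, can be sketched as follows. This is a hedged numpy illustration with an arbitrary temperature, not the authors' training code:

```python
import numpy as np

def multi_positive_contrastive_loss(z, caption_ids, tau=0.1):
    """Cross-entropy between the contrastive softmax over cosine similarities
    and a normalized multi-hot target marking all same-caption candidates."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / tau                               # temperature-scaled logits
    ids = np.asarray(caption_ids)
    n, losses = len(ids), []
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False                               # anchor never matches itself
        logits = sim[i, mask]
        targets = (ids[mask] == ids[i]).astype(float)
        if targets.sum() == 0:
            continue                                  # skip anchors with no positive
        targets /= targets.sum()                      # uniform over the positives
        log_q = logits - np.log(np.exp(logits).sum()) # log-softmax over candidates
        losses.append(-(targets * log_q).sum())
    return float(np.mean(losses))
```

With embeddings that cluster by caption the loss approaches zero, while embeddings whose same-caption images are far apart incur a large loss, which is the behavior the ablation above probes.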
Please don’t hesitate to let us know if you have more comments or questions. We respectfully hope that you can consider raising the score if our response could address existing concerns.
[a] Unified Perceptual Parsing for Scene Understanding
[b] MMSegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark
[c] Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the authors' detailed response. Most of my concerns have been well addressed. I have one additional question. For the Syn+Real experiments provided in the rebuttal (reviewer HH2w, AnD7), could the authors explain why SimCLR Syn+Real performs worse than SimCLR Real on **few-shot avg**?
---
Reply to Comment 1.1.1:
Title: Further response
Comment: Dear Reviewer DyTC,
Thank you for your response! This is a good catch, we have checked the results of “Syn+Real” v.s. “Real” SimCLR in the few-shot setting. “Syn+Real” outperforms (or at least on par with) “Real” on all datasets but CIFAR-10/100. On CFAIR-10/100, “Syn+Real” lags significantly behind “Real” (see below), making the overall average accuracy lower. We conjecture this is because of the resolution issue, which we solved for StableRep (check here). So now we are applying the same fix to “Syn+Real” and will update you once the results are available.
You may have noticed that, in contrast, “Syn+Real” slightly outperforms “Real” on average accuracy for CLIP in the few-shot setup. This is because for CIFAR-10/100, “Syn+Real” is on par with “Real” under CLIP. As for why SimCLR and CLIP behave differently here, we conjecture it may be because CLIP is a stronger learner, so it better captures the low-resolution bias in the real data than SimCLR does.
For your reference, the current few-shot results (without resolution fix) for SimCLR and CLIP on CIFAR-10/100 are as below:
| method | pre-train data | cifar-10 | cifar-100 |
|--|:--:|:--:|:--:|
|SimCLR| real | 64.0 | 70.4 |
|SimCLR| syn + real | 52.3 | 58.1 |
|CLIP | real | 77.5 | 82.1 |
|CLIP | syn + real | 77.5 | 81.6 | | Summary: The authors present a novel method for learning visual representations using synthetic data. The authors leverage text-to-image generative models (Stable Diffusion) to synthesize images from textual prompts, which are then used to train a self-supervised visual representation model. The synthetic data generation process is guided by a set of diverse and non-repetitive textual prompts, which helps in creating a wide variety of images. The authors demonstrate that their approach outperforms traditional self-supervised learning methods that rely on real images, especially when the amount of real data is limited. They also propose a multi-positive contrastive learning approach called StableRep that treats different synthetic images generated from the same text prompt as positives, which outperforms training with simply real or synthetic images on a variety of downstream tasks.
Strengths: 1. **Important Problem:** The authors study a very important problem, how to utilize synthetic images for training vision and vision-language models. They also show success at this central problem (with caveats, see weaknesses). They are one of the first studies showing successful use of synthetic data from generative models on standard computer vision benchmarks.
2. **Novel approach:** The multi-positive StableRep approach proposed by the authors for multi-caption invariance learning in self-supervised models is a novel idea and seems to work very well in practice leading to significant improvements over simply using synthetic images for training.
3. **Ablations:** The authors perform extensive ablations on the role of guidance scale, and also include a study on how additional language supervision could be used to increase caption efficiency while training with multiple positive synthetic images. The results indicate that proposed approaches (StableRep and StableRep+) indeed are more efficient versus training on real images.
Weaknesses: 1. **Limited motivation behind why synthetic images:** It is unclear to me as a reader why Synthetic Images are being used in Section 2.2 as training data for pre-training. It seems that there is no real benefit from using them on state-of-the-art contrastive models like BYOL and MoCo-V3 in terms of linear accuracy. Plus, the key issue for training is the lack of real-world labelled data, as mentioned by the authors' central question "how can we collect such large amounts of varied data to train AI models?" (L21). There is plenty of real-world unlabelled data that is available and can be used to train self-supervised models. The real benefit that synthetic images present is unlimited labelled data. However, no fine-tuning comparisons or supervised learning comparisons are provided except for fine-tuning MAE, which could very well be explained by the stochasticity in model training (82.9% vs 82.6%). Section 2.2 does not do much to justify the usage of synthetic images for training in a manner that addresses the authors' central question, and follow-up sections justify it for efficiency but not performance.
2. **Limits of generative model:** While the authors include a limitation section and mention briefly the issues that affect generative models like Stable Diffusion, there needs to be more rigorous evaluation of these issues. Conceptually, these methods suffer from issues like limited compositional understanding. There have been countless studies on the social biases reflected in these models. Very recently (after the NeurIPS deadline), an important work showed that models trained on data from generative models can suffer from model collapse, where tails of the original data distribution disappear from the subsequent trained model [1]. All these issues are quite major and should be part of this study, since it is one of the first to utilize synthetic data and show some improvements in results and could potentially affect future research in this area. In particular, the authors should include results on compositional benchmarks (like ARO, CREPE, Winoground, SugarCREPE etc) for their CLIP models trained with StableRep, discuss the fairness of the self-supervised models such as worst-class accuracy and geographic bias etc.
### References
1. Shumailov I, Shumaylov Z, Zhao Y, Gal Y, Papernot N, Anderson R. The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv preprint arXiv:2305.17493. 2023 May 27.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: My main concern is with the motivation behind using synthetic images as well as the limited commentary and no evaluation of the downstream effects of using data from generative models for training. I would be willing to improve my rating if the authors address these issues (as discussed in weaknesses).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors have addressed the limitations of their work in Section 7, but it could use further commentary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our work as novel and acknowledging that we study a very important problem.
> Limited motivation behind why synthetic images: It is unclear to me as a reader why Synthetic Images are being used in Section 2.2 as training data for pre-training. It seems that there is no real benefit from using them on state-of-the-art contrastive models like BYOL and MoCo-V3 in terms of linear accuracy
The whole point of Section 2.2 is to analyze and study how good synthetic images are for self-supervised training; it is **not** about chasing numbers. It is actually quite surprising to find that synthetic images are on par with (or often better than) real images. We have improved the training for MoCo v3, and now the results between real and synthetic images are even closer (see below). This finding is surprising in itself, not to mention that synthetic images are much better than real images for MAE under linear probing.
| | data | cc3m | cc12m|
|--|:--:|:--:|--:|
| MoCo v3 | real | 64.8 | 66.6 |
| | syn | 64.3 | 66.4 |
| BYOL | real | 64.0 | 65.9 |
| | syn | 64.2 | 65.6 |
Learning from synthetic data is quite an interesting topic, and there has actually been a whole line of work exploring synthetic images for training (such as [3],[24],[58],[31]). One advantage of using synthetic data is that we have multiple degrees of freedom to control how we generate the data. Sec 2.2 shows one degree of freedom, the guidance scale, while Sec 2.3 shows another, which is leveraging the stochasticity to generate multiple images per caption. We hope our work can inspire future exploration in this direction.
> no supervised learning comparisons
We have included supervised learning comparisons in question 1 of the "global rebuttal" shared with all reviewers. Feel free to let us know if you meant something else.
> no fine-tuning comparisons
We add additional ImageNet fine-tuning results for SimCLR:
| | Pre-train data | ImageNet fine-tuning |
|--|:--:|:--:|
| SimCLR | Real | 81.8|
| | Syn | **82.2**|
We tried Stable Diffusion 2.1 for synthetic images, and SimCLR and MAE with SD 2.0 achieved 82.1 and 83.0 ImageNet fine-tuning accuracy, both higher than pre-training on the real images (81.8 for SimCLR and 82.6 for MAE).
> which could very well be explained by the stochasticity in model training (82.9% vs 82.6%)
Not really; we fine-tuned multiple times and the averaged results remain the same. The performance gap in fine-tuning is usually small; please refer to Table 3 in the MAE paper, where MAE also only beats MoCo v3 and BEiT by 0.4%.
> Section 2.2 does not do much to justify the usage of synthetic images for training in a manner that addresses the authors' central question, and follow up sections justify it for efficiency but not performance.
We believe we have justified the performance in our paper (acknowledged by other reviewers too). Also see question 1 in "global rebuttal".
> All these issues are quite major and should be part of this study, since it is one of the first to utilize synthetic data and show some improvements in results and could potentially affect future research in this area. In particular, the authors should include results on compositional benchmarks (like ARO, CREPE, Winoground, SugarCREPE etc) for their CLIP models trained with StableRep, discuss the fairness of the self-supervised models such as worst-class accuracy and geographic bias etc.
This is a great suggestion! We tried the ARO benchmark for compositionality testing, and FairFace for evaluating representation fairness. We generally found that training on synthetic data improves performance on both compositionality and fairness.
For ARO (compositionality):
| |pre-train data| Relation accuracy |
|--|:--:|:--:|
| CLIP | Real | 46.4 |
| | Syn | **50.0** |
| StableRep+ | Syn | 47.3 |
For FairFace (fairness):
| |pre-train data| mean accuracy | best-class accuracy | worst-class accuracy |
|--|:--:|:--:|:--:|:--:|
| CLIP | Real | 28.2 | 60.2 | 0.3 |
| | Syn | 30.4 | 64.0 | 3.1 |
| StableRep+ | Syn | **37.2** | **74.9** | **10.0** |
CLIP w/ real data achieved only 0.3% accuracy on the "southeast asian male" class; CLIP w/ synthetic data improves this class to 3.1%, while our StableRep+ improves it to 27.2%. The worst class for StableRep+ is "middle eastern male", which CLIP w/ real and w/ synthetic data struggle with too (they achieved 6.9% and 6.2%). This suggests a geographic bias, and StableRep+ generally improves fairness.
Please don’t hesitate to let us know for any additional comments on the paper or the ARO or FairFace experiments.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for taking the time to write a thorough rebuttal, and also running the additional experiments based on my suggestions. The new results indeed solidify the utility of StableRep, and I am happy to say my concerns have largely been addressed. I have increased my rating from a Weak Accept to an Accept.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Dear Reviewer qM14,
Thank you for reading our response! We are happy to see it has largely addressed your concerns. For any further discussions, experiments, or clarifications that you think could address your remaining concerns, please feel free to let us know!
---
Reply to Comment 1.1.2:
Title: Update of additional (interesting) results
Comment: Dear Reviewer qM14,
Thank you again for your engagement in the discussion!
We have presented various results in the rebuttals to the other reviewers in addition to yours, and obtained some new results after the rebuttal deadline. We believe some of them might interest you, so we want to share a summary with you, which can be found [here](https://openreview.net/forum?id=xpjsOQtKqx&noteId=F3K5WcycnW).
We would be very happy if they address any of your remaining concerns, enhance your evaluation of our work, or simply make the work feel more interesting to you. Meanwhile, should you have any unresolved or new concerns or feedback stimulated by these additional results, please don't hesitate to let us know. | Summary: This paper investigates how synthetic data generated with the text-to-image diffusion model Stable Diffusion can be leveraged for representation learning.
To this end, the paper analyzes established representation learning approaches such as SimCLR and CLIP, but trained on the synthetically generated data. Further, it introduces a novel method,
dubbed "StableRep", specifically designed for representation learning from a generative model, to generate multiple "positive" image instances for downstream representation learning.
Experiments demonstrate the potential of synthetic data for representation learning and mostly perform on-par with or better than when trained on real data only.
Strengths: This is a well-written paper, which approaches a timely research question: To what extent can large-scale, generative models which have been trained on internet-scale data be leveraged for other tasks, by augmenting or replacing real data. The experiments are well-designed and hint at the potential of using synthetic data. StableRep is a nice method specificially designed for generative models that can generate multiple positive examples for a given prompt. The experiments are encouraging for future research in this area.
Weaknesses: While experiments demonstrate potential, there remain a few unaddressed points: How does the choice of generative model influence the results for representation learning? In particular, Stable Diffusion is conditioned on CLIP text features, which themselves are pretrained through CLIP's contrastive objective. There exist, however, other generative text-to-image variants that use non-contrastive features for conditioning, such as T5-representations (e.g., Imagen [https://github.com/deep-floyd/IF], IF [https://github.com/deep-floyd/IF]).
Further, the paper does not discuss how the size of the pretraining dataset for both CLIP (for conditioning SD) and the training data for SD change performance of StableRep (the pretraining data is much larger than CC3M/12M). In addition, the possibility of data poisoning w.r.t to the results presented in Tab. 2 and Tab. 3 is not discussed (for example, SD's training data might have contained examples from the linear probing datasets). The performance gap between CIFAR-10/100 and the other datasets remains unexplained.
l. 1014-107 need more detail on hyperparameter settings like number of steps, learning rate used, convergence of the model.
For further comments, please refer to the "Questions" section.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Can we mix synthetic and real data? How does this affect the outcome of training approaches like CLIP and SimCLR or StableRep?
- Other sampling hyperparameters than guidance scale w are not discussed. How does the choice of solver change the results? Same question for the number of denoising steps / function evaluations...
- small note: In Eq. 1, t could be misinterpreted as timestep variable, as often used in the context of diffusion models.
- Sec. 4, l. 229-233: Do the instabilities for larger models only occur for synthetic data?
- Sec. 5, l. 240-245: This section speculates about the inferior results when training CLIP on synthetic data only (vs real data only). Why not plot the imagenet zero-shot accuracy vs clip-image similarity for different guidance scales? Using a better model than SD 1.5 might also help clarify this point.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging that our method is nice and that our experiments are encouraging; we appreciate the constructive feedback. We address your concerns and questions one by one below.
> How does the choice of generative model influence the results for representation learning?
This is a good point, and we provide the results of using DeepFloyd-IF instead. Given that DeepFloyd-IF is slow and rebuttal time is limited, we: (1) only used its stage 1 & 2 models; (2) never tuned its hyperparameters. We train StableRep with 2 views and present the results below:
| method | ImageNet LP |
|--|:--:|
| StableRep w/ SD | 65.8 |
| StableRep w/ IF | 65.5 |
This suggests StableRep also works with models that are *not* conditioned on CLIP features.
> Further, the paper does not discuss how the size of the pretraining dataset for both CLIP (for conditioning SD) and the training data for SD change performance of StableRep. In addition, the possibility of data poisoning w.r.t to the results presented in Tab. 2 and Tab. 3 is not discussed.
Unfortunately, we cannot train CLIP and Stable Diffusion (SD) with various different data sizes to ablate this. In terms of poisoning, we perform a surrogate sanity check: we compute how close the downstream datasets are to StableRep's training set. Specifically, for an image $I$ in a downstream dataset, we find its closest image in StableRep's training set using the cosine similarity of a pre-trained SimCLR feature extractor $f$ (recall that SimCLR is optimized to maximize the cosine similarity of views of the same image). We define this highest similarity ($s$) as the image's similarity with StableRep's training set $D$. Concretely, $s_i = \max_{X \in D} \cos(f(I_i), f(X))$. We evaluate four datasets, and provide the statistics (avg, min, and max) of this highest-similarity metric over the images of each.
| Train Set | stats | aircraft|cars|flowers|food101|
|--|:--:|:--:|:--:|:--:|:--:|
| StableRep Train | avg | 0.891 | 0.836 | 0.836 | 0.791 |
|| min | 0.652 | 0.258 | 0.523 | 0.443 |
|| max | 0.985 | 0.980 | 0.967 | 0.958 |
| ImageNet Train | avg | 0.901 | 0.870 | 0.823 | 0.807 |
|| min | 0.700 | 0.629 | 0.586 | 0.560 |
|| max | 0.980 | 0.998 | 0.988 | 0.977 |
The results show that the downstream datasets are closer to the ImageNet training set (which has been widely used for pre-training and then evaluating on these same downstream datasets) than to StableRep's training set; we therefore believe the downstream evaluations are not poisoned by StableRep's training set.
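As an illustrative sketch of this check (our own code, not the paper's pipeline; the function name and the random stand-in features are ours), the per-image statistic $s_i$ reduces to a single matrix product over L2-normalized features:

```python
import numpy as np

def max_similarity_to_train_set(query_feats, train_feats):
    """For each downstream image feature f(I_i), return
    s_i = max over X in D of cos(f(I_i), f(X)) w.r.t. the training set D.

    Both inputs are (n, d) arrays of feature vectors."""
    # L2-normalize rows so that dot products equal cosine similarities
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    return (q @ t.T).max(axis=1)

# toy usage with random stand-ins for SimCLR features
rng = np.random.default_rng(0)
s = max_similarity_to_train_set(rng.normal(size=(3, 8)),
                                rng.normal(size=(5, 8)))
print(s.mean(), s.min(), s.max())  # the avg / min / max statistics tabulated above
```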
Please refer to our response to reviewer FSjQ for more relevant discussions.
> The performance gap between CIFAR-10/100 and the other datasets remains unexplained
We have identified that this is because CIFAR-10/100 have very low resolution images. More detailed results and explanations are provided in question 2 of the "global rebuttal" (please find it near the top of this page).
> Can we mix synthetic and real data?
Yes.
For SimCLR:
| method | pre-train data | cifar-10 | cifar-100 | ImageNet LP | Downstream avg | few-shot avg |
|--|:--:|:--:|:--:|:--:|:--:|:--:|
| SimCLR | Real | 88.3 | 70.3 | 61.5 | 72.3 | **73.0** |
| SimCLR | Syn | 84.8 | 65.2 | 63.7 | 72.7 | 70.8 |
| SimCLR | Syn+Real | **88.7** | **71.9** | **64.1** | **74.9** | 71.9 |
For CLIP:
| method | pre-train data | cifar-10 | cifar-100 | ImageNet LP | Downstream avg | few-shot avg |
|--|:--:|:--:|:--:|:--:|:--:|:--:|
| CLIP | Real | **94.0** | 79.0 | 70.3 | 81.2 | 86.7 |
| CLIP | Syn | 87.3 | 69.5 | 67.8 | 79.1 | 83.7 |
| CLIP | Syn+Real | 93.9 | **80.2** | **73.3** | **83.0** | **87.3** |
For StableRep (StableRep only trained with 2 views and 15 epochs):
| method | pre-train data | cifar-10 | cifar-100 | ImageNet LP | Downstream avg | few-shot avg |
|--|:--:|:--:|:--:|:--:|:--:|:--:|
| StableRep (2 views) | Syn | 88.5 | 71.5 | 67.9| 78.0 | 81.4 |
| StableRep (2 views) | Syn+Real | **92.2** | **76.1** | **69.1** | **79.3** | **84.2** |
Combining synthetic and real datasets uniformly improves all methods.
> -Other sampling hyperparameters than guidance scale w
This is a good suggestion, and the comparisons are below:
| method | solver | step | ImageNet LP|
|--|:--:|:--:|:--:|
| baseline | DDIM | 50 | 65.8 |
|| DDIM | 25 | 64.9 |
|| DDIM | 100 | 65.8 |
|| DPM | 50 | 66.0 |
Halving the sampling steps brings about a 0.9% drop, while doubling them has no effect. Switching from the ODE solver DDIM to the SDE solver DPM slightly improves over the baseline, by 0.2%.
> Do the instabilities for larger models only occur for synthetic data?
No. Large vision transformers are notoriously unstable during training in general. The paper [a] discusses this in more detail.
> This section speculates about the inferior results when training CLIP on synthetic data only (vs real data only). Why not plot the imagenet zero-shot accuracy vs clip-image similarity for different guidance scales?
We worry that a plot against CLIP score could be misleading. The performance of CLIP trained with synthetic images drops as the guidance scale increases, while the CLIP score increases with the guidance scale. Put together, this reads as "the higher the CLIP score, the lower the CLIP performance", which is misleading and only part of the story. We know that the guidance scale trades off data diversity against image-text alignment. While image-text alignment improves with the guidance scale, the diversity of the data drops considerably. As a result, the disadvantage from the drop in diversity outweighs the advantage from better image-text alignment.
We are more than happy to discuss further if you have more questions. We hope our responses, if they have addressed your concerns, will encourage you to consider increasing the rating.
[a] Scaling Vision Transformers to 22 Billion Parameters
---
Rebuttal 2:
Title: Any further questions/concerns are more than welcome
Comment: Dear Reviewer AnD7,
We would like to thank the reviewer again for your time and effort. In addition to the rebuttal in this thread, we have made a summary of new (and possibly interesting) results; see [here](https://openreview.net/forum?id=xpjsOQtKqx&noteId=F3K5WcycnW). We believe this evidence, together with our initial rebuttal, should be able to address most of your concerns (if not all).
Given there are only **< 3 days** left in the discussion period, we want to let you know that we would love to take any additional questions/comments. Meanwhile, if your concerns are clarified and you are convinced of the novelty and completeness of our work, we'd be grateful if you could update your review to reflect that. Once again, many thanks for your time and dedication to the review process; we are extremely thankful!
---
Rebuttal Comment 2.1:
Title: Great Rebuttal
Comment: Dear Authors,
Thank you for the work you put into this response and the general response. I think this is a strong rebuttal that answers most of my questions. I would like to see this work at NeurIPS 2023 and raise my score to 7.
---
Reply to Comment 2.1.1:
Title: Thank you
Comment: Dear Reviewer AnD7,
We appreciate your positive feedback. We are truly delighted to see that our response has addressed your questions.
Thank you once again for dedicating your time and effort to reviewing our work and providing us with insightful suggestions! | Rebuttal 1:
Rebuttal: We thank all reviewers for their insightful comments and feedback! We are glad to see the reviewers found:
1. the paper is well-written and easy to follow (by reviewer HH2w, AnD7, DyTC)
2. this paper approaches an interesting timely research question / very important problem (by AnD7, DyTC, FSjQ, qM14).
3. the approach is novel (by qM14, FSjQ) and nice (by AnD7).
4. this paper shows well-designed experiments / extensive ablation study / comprehensive evaluation / encouraging and promising results (by all reviewers).
We provide answers to shared questions/concerns here (and respond to remaining individual questions to each reviewer separately):
***1. directly use labels (e.g. from ImageNet) to synthesize image?***
We acknowledge that the challenging part of synthesizing with labels is how to design the text prompts. We include four variants here:
- **Language Enhancement (LE)** proposed by [24]: it generates 200 prompts for each ImageNet class.
- **c, hc, in place** proposed by [a]: it composes ImageNet class name *c* with the hypernyms of the class *hc* and Places class name *place*.
- **c, dc, in place** proposed by [a]: it composes ImageNet class name *c* with the definition of the class *dc* and Places class name *place*.
- **a photo of [category]** proposed by us (also suggested by Reviewer DyTC).
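As a rough, hypothetical sketch of how such prompt variants can be composed (the exact templates are defined in [24] and [a]; `make_prompts` and its wording are ours, and the LE variant is omitted since its 200 prompts per class are generated rather than templated):

```python
def make_prompts(c, hc, dc, places):
    """Illustrative composition of three of the prompt variants above
    for one ImageNet class: class name c, hypernym hc, definition dc,
    and a list of Places class names. Wording here is a sketch, not the
    exact templates of [24]/[a]."""
    return {
        "a photo of c":    [f"a photo of {c}"],
        "c, hc, in place": [f"{c}, {hc}, in {p}" for p in places],
        "c, dc, in place": [f"{c}, {dc}, in {p}" for p in places],
    }

# toy usage for a single class
prompts = make_prompts("tabby cat", "feline",
                       "a domestic cat with a striped coat",
                       ["kitchen", "street"])
print(prompts["a photo of c"])  # ['a photo of tabby cat']
```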
For each of the above variants, we perform both supervised cross-entropy (xent) training and SimCLR training. We compare them with our StableRep, as well as with training on the real ImageNet dataset. All methods are trained for roughly 300 ImageNet-equivalent epochs for a fair comparison. We report: (1) linear probing accuracy on ImageNet (ImageNet LP); (2) the average accuracy of linear transfer on 12 downstream datasets; (3) the average accuracy of few-shot evaluation on 11 datasets. Results are as follows:
| | pre-train data | method | ImageNet LP | Downstream avg | few-shot avg|
|--|:--:|:--:|:--:|:--:|:--:|
| Supervised | Real ImageNet | xent | - | 78.6 | 85.5 |
| Unsupervised | Real ImageNet | SimCLR | **74.3** | 74.3 | 77.8 |
| LE [24] | Synthetic | xent | 64.7 | 71.5 | 77.5 |
| | Synthetic | SimCLR | 66.8 |74.0 | 71.9 |
| c, hc, in place | Synthetic | xent | 67.1 |73.0 | 77.2 |
| | Synthetic | SimCLR | 66.1 | 73.2 | 71.1 |
| c, dc, in place | Synthetic | xent | 68.6 | 73.2 | 77.2 |
| | Synthetic | SimCLR | 65.8 | 73.2 | 69.2 |
| a photo of c | Synthetic | xent | 67.9 | 72.6 | 76.5 |
| | Synthetic | SimCLR | 66.6 | 74.1 | 71.3 |
| StableRep | Synthetic | StableRep |72.8 | **82.2** | **85.9** |
We observe that:
- Our StableRep clearly outperforms all models trained on synthetic ImageNet (even on ImageNet linear probing, despite the fact that synthetic ImageNet explicitly leverages ImageNet labels), and even outperforms supervised/unsupervised training on real ImageNet on both downstream transfer and few-shot evaluation.
- StableRep is only inferior to SimCLR trained on real ImageNet on the ImageNet linear probing benchmark, mostly because StableRep here has not yet converged. In fact, our converged StableRep with longer training can reach 75.7 (using cc12m captions) or 76.7 (using RedCaps captions). In contrast, SimCLR with real ImageNet does not improve much with longer training, indicating that the model here has already converged.
***2. StableRep (or training with synthetic data) underperforms training with real data on the cifar-10/100 evaluations, while outperforming it on the other evaluation datasets?***
We have identified this issue. It is due to image resolution: cifar-10/100 have very low resolution images (32x32 pixels), while the other evaluation datasets have much higher resolutions. To test this hypothesis, we also downsampled the other 9 evaluation datasets to 32x32 pixels before evaluating. The results for our StableRep and for CLIP with real data are below:
| method | downsample? | aircraft | cars| dtd| flowers| pets| sun397| caltech101 | food101 | voc2007| Average |
|--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| real CLIP | | 53.2| 75.8 | 75.7 | 96.0 | 86.7 | 72.5 | 92.7 | 81.6 | 86.1 | **80.0** |
| real CLIP | ✅ | 10.2| 35.7 | 48.7 | 71.6 | 70.4 | 58.5 | 77.9 | 66.8 | 83.5 | **58.1** |
| Stablerep | | 57.6| 80.3 | 79.0 | 96.7 | 87.1 | 73.2 | 94.0 | 83.5 | 87.2 | **82.1** |
| Stablerep | ✅ | 5.2 | 24.3 | 53.3 | 53.2 | 65.4 | 46.2 | 70.7 | 52.0 | 83.3 | **50.4** |
While StableRep outperforms real CLIP on all of these 9 tasks before downsampling, it underperforms real CLIP on 8 out of 9 tasks (sometimes significantly) after downsampling. This can be explained by the training data. While Stable Diffusion only synthesizes high-resolution images for StableRep training, the real image set often contains many low-resolution or blurred images, which in turn help real CLIP transfer to data of very low resolution, e.g., 32x32 for cifar-10/100. A fix may be to randomly downsample the synthetic images to very low resolution during StableRep training, which we leave as future work given the limited rebuttal time.
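A minimal sketch of the downsampling step used in this check (our own simplification: plain average pooling over array blocks whose sides divide evenly; the actual evaluation pipeline may use bicubic interpolation instead):

```python
import numpy as np

def box_downsample(img, side=32):
    """Average-pool an HxWxC image array down to side x side,
    simulating a cifar-like low-resolution input.
    Assumes H and W are multiples of `side` (a simplification)."""
    h, w, c = img.shape
    fh, fw = h // side, w // side
    # group rows into fh-blocks and columns into fw-blocks, then average
    return img.reshape(side, fh, side, fw, c).mean(axis=(1, 3))

low = box_downsample(np.random.rand(224, 224, 3))
print(low.shape)  # (32, 32, 3)
```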
Please don’t hesitate to let us know if you have any additional feedback. Thanks!
[a] Fake it till you make it: Learning transferable representations from synthetic ImageNet | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies pre-training by using generated image from diffusion models. It presents StableRep that generate different images with the same caption by using stable diffusion models. The model is hence pre-trained by using the generated samples and contrastive loss. Extensive experiments demonstrate the effectiveness of the proposed method by using synthetic images only.
Strengths: 1. This paper studies the pre-training by using synthetic data only. Experimental results show that pre-training by using synthetic data only outperforms the pre-training over real images.
2. This paper is generally clear and well-written.
Weaknesses: The pre-training over generated images by using diffusion models have been studied in [24]. In [24], it seems direct pre-training on synthetic images could helps to improve the classification results without the proposed pre-training pipeline. As the comparison between the proposed method and the reported results in [24] is missing, it is difficult the determine the effectiveness of the proposed pre-training pipeline.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. For general vision-language pre-training methods, image labels are usually very difficult to obtain. This paper studies the use of synthetic data, where the labels of synthetic images can be obtained during image synthesis. As such, we can simply pre-train the model by generating images together with the labels in the task-specific label spaces, which could improve the model performances [24]. As such, it is not clear why the contrastive pre-training is necessary or better when using synthetic data for pre-training. More discussion and experiments are expected.
2. According to the experimental results, pre-training on synthetic data helps to improve the performances on most of the datasets while may introduce degradation some of the datasets (e.g. CIFAR-10). How about combining the synthetic and real datasets together for pre-training?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review. In the following we address your concern in detail.
> the comparison between the proposed method and the reported results in [24] is missing
This is a reasonable comparison, which we have included in question 1 of the "global rebuttal" shared with all reviewers (please find it near the top of the page).
Our StableRep clearly outperforms [24] (and additionally [a]), and even outperforms supervised training using Real ImageNet on both downstream transfer and few-shot evaluation.
> As such, we can simply pre-train the model by generating images together with the labels in the task-specific label spaces, which could improve the model performances [24]. As such, it is not clear why the contrastive pre-training is necessary or better when using synthetic data for pre-training. More discussion and experiments are expected.
Firstly, as shown in the aforementioned experiments, our StableRep has higher accuracy than [24], even though we do not specifically use ImageNet labels.
Secondly, using the target label space for synthesis means we have to know the downstream task beforehand. This means we have to synthesize a different dataset and train a different model for each task, which clearly limits the applicability of this approach. In contrast, we aim to learn a general-purpose representation which can hopefully be transferred easily to a broader range of tasks. The aforementioned comparison between [24] and our StableRep also suggests that our representation has stronger transferability.
> According to the experimental results, pre-training on synthetic data helps to improve the performances on most of the datasets while may introduce degradation some of the datasets (e.g. CIFAR-10)
We have identified this issue. It is due to image resolution: cifar-10/100 have very low resolution images (32x32 pixels), while the other evaluation datasets have much higher resolutions. To test this hypothesis, we also downsampled the other 9 evaluation datasets to 32x32 pixels before evaluating. The results and explanations are presented in question 2 of the "global rebuttal" (please find it near the top of the page). A fix may be to randomly downsample the synthetic images to very low resolution during StableRep training, which we leave as future work given the limited rebuttal time.
> How about combining the synthetic and real datasets together for pre-training?
This is a good suggestion, and the results of combining synthetic and real datasets are as below.
For SimCLR:
| method | pre-train data | cifar-10 | cifar-100 | ImageNet LP | Downstream avg | few-shot avg |
|--|:--:|:--:|:--:|:--:|:--:|:--:|
| SimCLR | Real | 88.3 | 70.3 | 61.5 | 72.3 | **73.0** |
| SimCLR | Syn | 84.8 | 65.2 | 63.7 | 72.7 | 70.8 |
| SimCLR | Syn+Real | **88.7** | **71.9** | **64.1** | **74.9** | 71.9 |
For CLIP:
| method | pre-train data | cifar-10 | cifar-100 | ImageNet LP | Downstream avg | few-shot avg |
|--|:--:|:--:|:--:|:--:|:--:|:--:|
| CLIP | Real | **94.0** | 79.0 | 70.3 | 81.2 | 86.7 |
| CLIP | Syn | 87.3 | 69.5 | 67.8 | 79.1 | 83.7 |
| CLIP | Syn+Real | 93.9 | **80.2** | **73.3** | **83.0** | **87.3** |
For StableRep (the entries here only use two views per caption, and are trained for only 15 epochs):
| method | pre-train data | cifar-10 | cifar-100 | ImageNet LP | Downstream avg | few-shot avg |
|--|:--:|:--:|:--:|:--:|:--:|:--:|
| StableRep (2 views) | Syn | 88.5 | 71.5 | 67.9| 78.0 | 81.4 |
| StableRep (2 views) | Syn+Real | **92.2** | **76.1** | **69.1** | **79.3** | **84.2** |
With these results, we conclude:
- the cifar-10/100 resolution issue causes both CLIP and SimCLR to drop if trained only on synthetic data, but combining synthetic and real datasets significantly alleviates this issue (and often surpasses real-only).
- combining synthetic and real datasets uniformly improves all methods.
We hope that this discussion and these results help address your concerns and lead to a favorable increase of the score. Please don’t hesitate to let us know if you have any additional comments or suggestions.
[a] Fake it till you make it: Learning transferable representations from synthetic ImageNet
---
Rebuttal 2:
Title: Any further questions/concerns are more than welcome
Comment: Dear Reviewer HH2w,
We would like to thank the reviewer again for your time and effort. In addition to the rebuttal in this thread, we have made a summary of new (and possibly interesting) results; see [here](https://openreview.net/forum?id=xpjsOQtKqx&noteId=F3K5WcycnW). We believe this evidence, together with our initial rebuttal, should be able to address all your concerns.
Given there are only **< 3 days** left in the discussion period, we want to let you know that we would love to take any additional questions/comments. Meanwhile, if your concerns are clarified and you are convinced of the novelty and completeness of our work, we'd be grateful if you could update your review to reflect that. Once again, many thanks for your time and dedication to the review process; we are extremely grateful.
Fitting trees to $\ell_1$-hyperbolic distances | Accept (poster) | Summary: The paper introduces a new algorithm, HCCRootedTreeFit, for fitting tree metrics to a given distance matrix. The algorithm is designed to minimize the ℓ1 distortion of the fit. The authors provide a detailed explanation of the algorithm, its theoretical properties, and an extensive experimental evaluation. The results show that the algorithm performs optimally when datasets are close to tree-like and when distortion is measured in the ℓ1 sense. The paper suggests that commonly used datasets, especially in geometric graph neural nets, are not well-represented by trees, indicating the need for more refined geometric notions for learning tasks with these datasets.
Strengths: The algorithm is theoretically sound, thoroughly evaluated, and performs optimally on tree-like datasets. The work provides valuable insights into dataset characteristics and has significant potential for practical applications, particularly in machine learning and data analysis.
Weaknesses: The evaluation of the proposed algorithm is primarily focused on its performance on tree-like datasets. While this is certainly important, it would be beneficial to see how the algorithm performs on a broader range of datasets, particularly those that are not tree-like. This would provide a more comprehensive understanding of the algorithm's performance and its applicability to real-world problems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please solve the weakness above
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * We thank all the reviewers for their comments and fruitful feedback. Below each review, we have posted a rebuttal that directly addresses concerns and clarifies any misunderstandings. If you wish to obtain further clarification, please reply in the relevant thread, and we will get back to you as soon as possible.
* Thank you for your review and the constructive comments. We appreciate that you note our paper "provides valuable insights" into dataset characteristics. We address your comment from the "Weaknesses" section below.
* The evaluation of the proposed algorithm is primarily focused on its performance on tree-like datasets. While this is certainly important, it would be beneficial to see how the algorithm performs on a broader range of datasets, particularly those that are not tree-like. This would provide a more comprehensive understanding of the algorithm's performance and its applicability to real-world problems.
* We understand the point that more results on a broader range of datasets would make our discussion more fruitful and comprehensive. We will conduct extra experiments on Euclidean datasets in order to 1) compare with other popular tree-fitting algorithms that work on Euclidean points (such as QuadTree) as a baseline, and 2) evaluate on a broader range of datasets which are NOT tree-like at all.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply. May I ask in what application scenarios the authors' method can be used and what kinds of problems it can solve?
---
Reply to Comment 1.1.1:
Comment: There are many uses of hyperbolic embeddings: in geometric graph neural nets, for visualizing and representing hierarchical data, for understanding hierarchical relations amongst data points, for representing tree-like sparse graphs, and, most specifically, for studying phylogenetic data (e.g., the evolution of organisms, mutations of viruses, etc.). | Summary: The authors consider the tree fitting problem for a given distance. The authors cast the tree fitting problem as finding the relation between the hyperbolicity vector and the error of tree embedding. The authors propose an algorithmic approach with provably tight $\ell_1$ error. The authors also illustrate the advantage of the proposed approach on some “tree-like” datasets.
Strengths: + By casting the fitting tree problem as finding the relation between the hyperbolicity vector and the error of tree embedding, the authors propose provably tight $\ell_1$ error algorithmic approach.
+ The authors illustrate the advantage of the proposed approach on some “tree-like” datasets.
+ Overall, the presentation is good. (It may be better to elaborate more details on the proposed algorithmic approach in Section 3.3)
Weaknesses: + At the heart of the proposed algorithmic approach (Section 3.3), although the authors summarize their proposed approach in several Algorithms, it is hard to get the ideas how the tree is constructed. It seems the authors focus more on analysis for the proposed algorithm.
+ It seems better to include discussions about other existing approaches of tree fitting for a given distance. Besides the provable analysis of the proposed algorithmic approach, it is not clear how the proposed approach addresses some limits of existing approaches on tree fitting for a given distance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: + Could the authors elaborate the main ideas of the proposed algorithmic approach in Section 3.3? (although some algorithms are given to summarize the ideas, it is better to discuss about it, e.g., how the tree is constructed to fit a given distance, besides the analysis)?
+ Could the authors discuss how the proposed algorithmic approach improves over some simple approach, such as the QuadTree approach (which is popularly used in the context of optimal transport)?
+ If possible, please consider the QuadTree approach as a baseline for tree fitting in the experiments.
+ In Algorithm 1, could the authors explain the “highlyconnected” concept? How is it defined as in the Algorithm 1?
+ In line 269, why do the negative weights appear? (for tree metric, should the weights be nonnegative?) It seems unclear whether these approaches try to find a closest tree metric for a given distance?
Some minor points:
+ In line 58, “the table below”? which table is mentioned? It is better to have a reference for the mentioned table.
+ In Theorem 3.4, could the authors comment on the constant $C=4$? Is it a tight result or is there any chance to improve it?
---
Thank you for the rebuttal.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are no discussions about the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * Thank all the reviewers for their comments and fruitful feedback. Below each review, we have posted a rebuttal that directly addresses concerns and clarifies any misunderstandings. If you wish to obtain further clarification, please reply in the relevant thread, and we will get back to you as soon as possible.
* Thank you for your review and the constructive comments. We appreciate that you note our paper has a good presentation. We address your comments from the "Weaknesses" and "Questions" section below.
* At the heart of the proposed algorithmic approach (Section 3.3), although the authors summarize their proposed approach in several Algorithms, it is hard to get the ideas how the tree is constructed. It seems the authors focus more on analysis for the proposed algorithm.
* We will add more sentences about the intuition behind the algorithm in order to elucidate the idea. Our tree construction is a bottom-up approach using the sorted order of "Gromov product" over all pairs. Then we sequentially add Steiner nodes to "fit" those products.
* Many of the existing tree fitting methods suffer from a lack of analysis (e.g., TreeRep, NeighborJoin). Even Gromov's original work, while mathematical, isn't presented as an algorithm. It's only a few recent papers that have begun to do this analysis, including ours.
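* To illustrate the Gromov products mentioned above (a hedged sketch, not the paper's code): the Gromov product of $y$ and $z$ at base point $x$ is $(y|z)_x = \frac{1}{2}(d(x,y) + d(x,z) - d(y,z))$, and the rebuttal describes sorting these products over all pairs before sequentially adding Steiner nodes. The function name and example metric below are illustrative only.

```python
from itertools import combinations

def gromov_products(d, base):
    """Return {(y, z): (y|z)_base} for all pairs y < z, given a symmetric
    distance matrix d (2D list) over points 0..n-1."""
    n = len(d)
    points = [p for p in range(n) if p != base]
    return {
        (y, z): (d[base][y] + d[base][z] - d[y][z]) / 2
        for y, z in combinations(points, 2)
    }

# Example: a unit-edge path metric 0 - 1 - 2.
d = [[0, 1, 2],
     [1, 0, 1],
     [2, 1, 0]]
products = gromov_products(d, base=0)
# (1|2)_0 = (1 + 2 - 1) / 2 = 1.0
```

Sorting `products.items()` by value would give the order in which pairs are processed in a bottom-up construction of this kind.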
* It seems better to include discussions about other existing approaches of tree fitting for a given distance. Besides the provable analysis of the proposed algorithmic approach, it is not clear how the proposed approach addresses some limits of existing approaches on tree fitting for a given distance.
* As we illustrated in the synthetic experiments, ours and other empirical algorithms behave quite differently on the synthetic data sets. This suggests that existing approaches may underperform on certain “tree-like” inputs, and that common data sets are not nearly as tree-like as researchers think. Also, as far as we know, both the $\ell_1$ distortion bound and our approach are novel: there are some theoretical works which seek to minimize the $\ell_1$ distortion (including the recent result [1], which is actually an $O(1)$ approximation), but they cannot be implemented practically.
* Could the authors elaborate the main ideas of the proposed algorithmic approach in Section 3.3? (although some algorithms are given to summarize the ideas, it is better to discuss about it, e.g., how the tree is constructed to fit a given distance, besides the analysis)?
* The intuition behind IsHighlyConnected is that if every node in both clusters is reasonably connected with those in the other cluster, then the number of bad triangles can be favorably controlled.
* Could the authors discuss how the proposed algorithmic approach improves over some simple approach, such as the QuadTree approach (which is popularly used in the context of optimal transport)? If possible, please consider the QuadTree approach as a baseline for tree fitting in the experiments.
* QuadTree requires Euclidean data points as input, while ours only requires a distance matrix, so ours works in a more generic setup. We will conduct extra experiments on Euclidean datasets in order to 1) compare with QuadTree (and other popular algorithms) as baselines and 2) evaluate on a broader range of datasets, including ones that are NOT tree-like at all.
* In Algorithm 1, could the authors explain the “highlyconnected” concept? How is it defined as in the Algorithm 1?
* The intuition behind IsHighlyConnected is that if every node in both clusters is reasonably connected with those in the other cluster, then the number of bad triangles can be favorably controlled.
* We will add a figure which helps to describe the concept like https://i.imgur.com/u5uyw68.jpg.
* In line 269, why do the negative weights appear? (for tree metric, should the weights be nonnegative?) It seems unclear whether these approaches try to find a closest tree metric for a given distance?
* For a set of distances to form a (tree) metric, all edge weights must be nonnegative. Empirically, however, negative weights have been shown to appear in other algorithms, such as NeighborJoin or TreeRep. If the input distances satisfy a metric, then our algorithm is guaranteed to output a proper tree fitting with all nonnegative weights. (This cannot be guaranteed for the other tree fitting algorithms.)
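As background for this exchange (an illustrative sketch, not code from the paper): a dissimilarity is realizable by a tree with nonnegative edge weights exactly when it satisfies the classical four-point condition, which can be checked directly from a distance matrix. The function name and example below are hypothetical.

```python
from itertools import combinations

def satisfies_four_point(d):
    """Four-point condition: for every quadruple {x, y, z, w}, the two
    largest of the three pairwise sums must be equal.  d is a symmetric
    distance matrix given as a 2D list."""
    n = len(d)
    for x, y, z, w in combinations(range(n), 4):
        sums = sorted([d[x][y] + d[z][w],
                       d[x][z] + d[y][w],
                       d[x][w] + d[y][z]])
        if sums[1] != sums[2]:
            return False
    return True

# A star tree with unit edges: every leaf-to-leaf distance is 2.
star = [[0, 2, 2, 2],
        [2, 0, 2, 2],
        [2, 2, 0, 2],
        [2, 2, 2, 0]]

# A unit-edge 4-cycle, which is not a tree metric.
cycle = [[0, 1, 2, 1],
         [1, 0, 1, 2],
         [2, 1, 0, 1],
         [1, 2, 1, 0]]
```

Here `satisfies_four_point(star)` holds while `satisfies_four_point(cycle)` fails, matching the intuition that cycles prevent an exact tree fit.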
* In line 58, “the table below”? which table is mentioned? It is better to have a reference for the mentioned table.
* Should be labeled as "Table 1". Thanks for pointing this out!
* In Theorem 3.4, could the authors comment on the constant $C = 4$? Is it a tight result or is there any chance to improve it?
* We do not think that $C = 4$ is tight, but we could not prove an improvement; it seems difficult. We also have not yet found an example which forces $C > 1$.
* The reason we emphasized the constant $C$ is that the usual (unweighted) CC problem can be solved with $C = 1$, which is actually tight. In more detail, for an unweighted graph we can find a clustering such that the number of disagreement edges is bounded by the number of bad triangles (with the same definition). For example, an algorithm presented in "Aggregating Inconsistent Information: Ranking and Clustering" (Ailon et al., 2008) achieves this bound. Moreover, when analyzing this problem, such triangles play an important role. That is why we are somewhat optimistic that $C = 4$ can be improved, possibly to the tight $C = 1$.
---
Rebuttal Comment 1.1:
Title: Further discussion needed?
Comment: Dear reviewer,
Please let us know what you think about the rebuttal and whether you have any points you want to raise.
Thank you,
AC
---
Rebuttal Comment 1.2:
Comment: Thank you for the rebuttal, I have no other raised points. | Summary: The paper formulate the $l_p$ tree fitting problem introduces a new algorithm, HCCROOTEDTREEFIT, for building trees in hyperbolic space by investigating the relationship between hyperbolicity (ultrametricity) vectors and the error of tree (ultrametric) embedding, which outperforms previous methods both theoretically and empirically.,
Strengths: 1) The authors have developed a novel approach to the tree-fitting problem that applies hyperbolic geometry and geometric group theory.
2) The developed algorithm, HCCROOTEDTREEFIT, delivers a tree metric with $\ell_1$ distortion bounded by a polynomial function of the average hyperbolicity, while previous research delivers only $\ell_{\infty}$ distortion bounds.
3) Comparisons not only of performance but also of speed are explored, which makes more computational sense.
4) Repeated experiments are conducted and standard deviations are analyzed.
Weaknesses: The paper is not well self-contained, needing additional supplementary materials to be complete.
Table captions are not very comprehensive, e.g., Table 3 and Table 5, which increases the difficulty of understanding the major experimental results.
The results lack qualitative examples to show 1) how the proposed tree-fitting method differs from previous tree-fitting methods, and 2) how the synthetic datasets differ from real datasets.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: As described in the abstract, the proposed method is inspired by geometric group theory. Can the authors be more specific about what inspired the novel approach to the tree-fitting problem, and which part of geometric group theory is used?
What does the notation in the equation between lines 82 and 83 mean? $\binom{X}{3}$ conventionally seems to denote a binomial coefficient, but $X$ here is a set.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors do not discuss limitations, but this method is quite theoretical and very unlikely to have a negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * Thank all the reviewers for their comments and fruitful feedback. Below each review, we have posted a rebuttal that directly addresses concerns and clarifies any misunderstandings. If you wish to obtain further clarification, please reply in the relevant thread, and we will get back to you as soon as possible.
* The paper is not well self-contained, needing additional supplementary materials to be complete.
* It was hard to include every detail of the algorithms and proofs within the page limit, so we put the most technical proofs in the supplementary materials.
* Table captions are not very comprehensive, e.g., Table 3 and Table 5, which increase the difficulty to understand the major experimental results.
* We will include more details in the table captions; thank you for your feedback! For example, the caption of Table 3 could be “$\ell_\infty$ error (i.e., $\|d - d_T\|_\infty$, max distortion) for each method and data set.”
* The results lack qualitative examples to show 1) how different the proposed tree-fitting method is compared with previous tree-fitting methods. 2) how different the synthetic datasets is to real datasets.
* Many of the existing tree fitting methods suffer from a lack of analysis (e.g., TreeRep, NeighborJoin). Even Gromov's original work, while mathematical, isn't presented as an algorithm. It's only a few recent papers that have begun to do this analysis.
* The synthetic data sets are designed to emphasize and provide specific control over the proxy measures of “tree-likeness”. These examples show that the currently accepted notions of hyperbolicity and tree-likeness of certain data sets may be imperfect. Also, we would argue that quantitative analysis of such common data sets has been absent as well.
* As described in the abstract, the proposed method is inspired by geometric group theory, can the authors be more specific on what is inspired the novel approach to the tree-fitting problem using which part of geometric group theory?
* Gromov's original work was in Geometric Group Theory; he was a pioneer of the field. We refer to his “algorithm”, which is a very small part of the original work, and $\delta$-hyperbolicity also comes from his work.
* What does the notation in equation between line 82 and 83 mean? $\binom{X}{3}$ seems to mean combinatoric number in convention but $X$ here is a set.
* There is also a convention that defines $\binom{X}{k}$ ($X$ choose $k$) for a set $X$ as the set of all $k$-element subsets of $X$, in a very similar fashion. See, for example, Chapter 1.2 of Stanley's Enumerative Combinatorics.
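As an illustrative aside (not part of the original exchange): the set-valued "$X$ choose $k$" convention discussed here corresponds directly to `itertools.combinations` in Python, and its size recovers the usual binomial coefficient.

```python
from itertools import combinations

X = {"a", "b", "c", "d"}

# binom(X, 3) in the set-valued convention: all 3-element subsets of X.
three_subsets = {frozenset(c) for c in combinations(sorted(X), 3)}

# Its cardinality is the ordinary binomial number C(4, 3) = 4.
# len(three_subsets) -> 4
```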
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response. I would like to discuss further the last three points:
* This question pertains to **qualitative** examples. It would be beneficial to include visualizations to demonstrate the superiority of the proposed method, especially since hyperbolic geometry is a mainstream tool in the visualization community [c1].
* Gromov's original work in Geometric Group Theory is cited in almost all the related works. However, my inquiry regarding **being specific** wasn't adequately addressed.
* In chapter 1.2 of Stanley’s "Enumerative Combinatorics" [c2] and a more recent version [c3], the convention is **NOT about subsets**. Instead, it is also used for combinatorial numbers. Could the authors clarify the origin of this convention?
[c1] Lamping J, Rao R, Pirolli P. A focus+ context technique based on hyperbolic geometry for visualizing large hierarchies[C]//Proceedings of the SIGCHI conference on Human factors in computing systems. 1995: 401-408.
[c2] Stanley R P. Enumerative combinatorics, v. 1[J]. Wadsworth and Brooks/Cole Mathematics Series, Monterey, California, 1986.
[c3] Stanley R P. Enumerative Combinatorics Volume 1 second edition[J]. Cambridge studies in advanced mathematics, 2011.
---
Reply to Comment 1.1.1:
Comment: Strictly speaking we are not using the "group" portion of the phrase "geometric group theory." One of the origins of the field was to study the Cayley graphs of finitely generated groups. If one endows such Cayley graphs with metrics and considers these graphs as metric spaces, sometimes they are hyperbolic spaces (and the groups are referred to as hyperbolic groups). The machinery for considering both graphs and hyperbolic spaces is useful, however, despite the disconnection from a group.
---
Reply to Comment 1.1.2:
Comment: In Chapter 1.2 of Stanley's "Enumerative Combinatorics" (we checked both [c2] and [c3]), they state: "Now define $\binom{S}{k}$ (sometimes denoted $S^{(k)}$ or otherwise, and read "S choose k") to be the set of all $k$-element subsets (or $k$-subsets) of $S$." Only then do they define the usual combinatorial number. While we understand this convention is not as widely accepted as the number convention, we have found that several references already use this notation, probably due to its convenience. | Summary: This paper introduces a novel tree fitting algorithm named HCCRootedTreeFit. First, the authors motivate the need for a better tree fitting algorithm by stating that current methods "assume almost nothing about the underlying discrete point set, when, in fact, many real application data sets are close to hierarchical or nearly so".
Before introducing their new method, the authors introduce new proxy measures of how tree-like a dataset is. These proxy measures are used later on in several bounds, including in their main theoretical result which guarantees the existence of a tree fitting method with some nice bound on the distortion.
In their theoretical analysis, the authors mention a connection between the tree fitting problem and hierarchical correlation clustering (HCC), along with an equivalence result between tree fitting algorithms and ultrametric fitting algorithms. Then, they introduce an adapted HCC problem formulation and three algorithms named HCCTriangle, HCCUltraFit and HCCRootedTreeFit. The first algorithm solves the adapted HCC problem. The second algorithm uses HCCTriangle to solve the ultrametric fitting problem. The last algorithm uses HCCUltraFit to solve the rooted tree fitting problem.
In the experiments, the HCCRootedTreeFit algorithm is tested versus various other methods from the literature. For these experiments the authors have used both common and synthetic datasets. They observe that their method underperforms on common datasets, but performs well on synthetic datasets with respect to the $\ell_1$ norm. They attribute this observation to a lack of tree-like structure in the common datasets. They furthermore observe that their method underperforms with respect to the $\ell_\infty$ norm.
Strengths: 1. The paper introduces interesting new notions of hyperbolicity and ultrametricity through their hyperbolicity and ultrametricity vectors.
2. The authors derive an interesting connection between the distortion bounds for tree fitting and ultrametric fitting algorithms.
3. The new notion of hyperbolicity is used to show that, contrary to common belief, the common datasets are not very tree-like.
Weaknesses: The paper is very difficult to read. Due to the very large number of definitions and technical results, the paper reads more like a collection of statements than a paper. Much of the actual content seems to be put in the appendix, adding to the feeling that the paper is not at all self-contained. Moreover, the paper appears to use many tricks from different papers without any description of these tricks. For example, in line 233, the authors refer to some paper to obtain a tree fit without actually explaining the actual procedure.
The structure of the paper also makes it rather difficult to follow, with several forward- and backward references spread throughout the paper. As an example, Subsection 3.2 appears to state several results regarding algorithms that have not yet been introduced. Then, in the next subsection, these algorithms are introduced and following these algorithms, the authors quickly throw in a proof of a result from the previous Subsection.
Another issue with the paper is that they first formulate the tree (ultrametric) fitting problem, but then, after quickly mentioning a connection to hierarchical correlation clustering, seem to actually solve an adapted version of this HCC problem. However, the introduction of their adapted HCC problem is rather unclear and uses, for example, a 'number of disagreement edges' term that is not properly defined in the paper. Also, it is not really clear to me what the connection between this problem and the original problem is.
Due to these issues I cannot adequately judge the validity of the many technical results of this paper.
There is also an issue regarding the newly proposed algorithm within the context of the experimental results. The authors show that their method underperforms on common datasets with respect to both the $\ell_1$ norm and the $\ell_\infty$ norm. Moreover, the method is quite slow compared to two of the other methods. It therefore appears that this method is only useful in synthetic settings.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. What is the exact connection between the tree fitting problem and the HCC problem?
2. How is the term $|E_t \Delta E(P_t)|$ defined precisely?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Mostly yes. They do not really address the fact that their method does not seem to have a proper use case, but they do point out its weaknesses in the experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: * Thank all the reviewers for their comments and fruitful feedback. Below each review, we have posted a rebuttal that directly addresses concerns and clarifies any misunderstandings. If you wish to obtain further clarification, please reply in the relevant thread, and we will get back to you as soon as possible.
* Thank you for the review and the constructive comments. We appreciate that you note our paper “derive an interesting connection” between the distortion bounds. We address your concerns from the “Weaknesses” and “Questions” section below.
* The paper is very difficult to read. Due to the very large number of definitions and technical results, the paper reads more like a collection of statements than a paper. Much of the actual content seems to be put in the appendix, adding to the feeling that the paper is not at all self-contained. Moreover, the paper appears to use many tricks from different papers without any description of these tricks. For example, in line 233, the authors refer to some paper to obtain a tree fit without actually explaining the actual procedure.
* It was hard to include every detail of the algorithms and proofs within the page limit, so we put the most technical proofs in the supplementary materials. However, we will absolutely work on improving readability. For example, as you noted, lines 233--236 should clearly refer to Algorithm 4 to make the procedure clearer. Thank you for this constructive feedback!
* The structure of the paper also makes it rather difficult to follow, with several forward- and backward references spread throughout the paper. As an example, Subsection 3.2 appears to state several results regarding algorithms that have not yet been introduced. Then, in the next subsection, these algorithms are introduced and following these algorithms, the authors quickly throw in a proof of a result from the previous Subsection.
* Again, the structure was not optimal, as many technical proof details initially had to go in the Appendix due to space constraints. We will work on improving the structure.
* Another issue with the paper is that they first formulate the tree (ultrametric) fitting problem, but then, after quickly mentioning a connection to hierarchical correlation clustering, seem to actually solve an adapted version of this HCC problem. However, the introduction of their adapted HCC problem is rather unclear and uses, for example, a 'number of disagreement edges' term that is not properly defined in the paper. Also, it is not really clear to me what the connection with this problem and the original problem is.
* There are a number of references which make a connection between the tree (ultrametric) fitting problem and the HCC problem; we address this question below. The connection to the “adapted” HCC problem is that the quantity ‘number of disagreement edges’ (which should be properly defined using the symmetric difference; thanks for pointing this out) is related to the $\ell_1$ notion of the hyperbolicity vector, which is why we state the result in terms of the $\ell_1 / \ell_1$ tree fitting problem (Definition 2.2). This work potentially develops and broadens the known notion of “tree-likeness”.
* There is also an issue regarding the newly proposed algorithm within the context of the experimental results. The authors show that their method underperforms on common datasets with respect to both the $\ell_1$ norm and the $\ell_\infty$ norm. Moreover, the method is quite slow compared to two of the other methods. It therefore appears that this method is only useful in synthetic settings.
* We understand the point that our algorithm does not practically outperform currently known methods. As illustrated in the synthetic experiments, we want to highlight that the currently accepted concepts of hyperbolicity and tree-likeness of certain data sets may be imperfect, as observed in some synthetic examples and experiments. We suggest that future work on achieving more theoretically sound bounds will be interesting; perhaps such methods could also work well on common data sets.
* What is the exact connection between the tree fitting problem and the HCC problem?
* The technical connection between the ultrametric fitting problem and the tree fitting problem can be described as a reduction from tree metrics to ultrametrics. In other words, if we have an adequate ultrametric fitting algorithm, then we can use it as a subroutine to develop a tree fitting algorithm. This connection was first developed in [7] and is already known. The connection between the HCC problem and the ultrametric fitting problem is also a reduction: there is a reduction from ultrametrics to HCC. While there are a number of references describing this connection (including [2]), the connection we explored and utilized is similar to the work in [1].
* We included these technical details in the Appendix in order to be self-contained.
* How is the term $|E_t \Delta E(P_t)|$ defined precisely?
* Given a partition $P$ over $X$, we define $E(P) := \cup_{C \in P} \binom{C}{2}$. In other words, $E(P)$ is the collection of *every* edge in the clusters of $P$. $\Delta$ denotes the symmetric difference of two sets, namely $A \Delta B := (A \setminus B) \cup (B \setminus A)$. We will add these descriptions to make the statement precise; thank you for your feedback!
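The definitions in this reply can be made concrete with a short sketch (illustrative only, not the paper's code): $E(P)$ collects all within-cluster pairs of a partition $P$, and $|A \Delta B|$ is the size of the symmetric difference. The function names and example edge set are hypothetical.

```python
from itertools import combinations

def edges_of_partition(P):
    """E(P) = union over clusters C in P of all 2-element subsets of C."""
    return {frozenset(e) for C in P for e in combinations(sorted(C), 2)}

def sym_diff_size(A, B):
    """|A Δ B| = |(A \\ B) ∪ (B \\ A)|, via Python's set XOR operator."""
    return len(A ^ B)

P = [{"a", "b"}, {"c", "d"}]            # partition into two clusters
E_t = {frozenset({"a", "b"}),           # an edge set to compare against
       frozenset({"b", "c"})}

# E(P) = {{a,b}, {c,d}}; the symmetric difference with E_t has size 2,
# since {b,c} is only in E_t and {c,d} is only in E(P).
```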
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their clarifications.
I understand that including the proofs and details of the algorithms is difficult given the page limit. However, I do strongly recommend the authors to:
1. add some explanations whenever results from other papers are introduced and applied;
2. restructure the paper to improve readability and remove forward- and backward references when possible.
---
Reply to Comment 1.1.1:
Comment: Thank you! We will revise accordingly. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation | Accept (poster) | Summary: This paper proposes a conditional 3D generation method by pre-aligning features of different modalities when training a 3D AE. After that, a latent diffusion model is applied to generate latent vectors for 3D decoder conditioned on text/image. The results quantitatively and qualitatively shows that the proposed method is able to achieve impressive results.
Strengths: 1. The results show that this method is able to achieve multiple tasks, which is good and important for a generation-related paper.
2. The main figure is easy to understand and follow, making this paper easier to read.
3. The core idea in this paper looks original.
Weaknesses: 1. No qualitative results show the diversity of the proposed method. Such a result is important to a generation-related paper.
2. The concept of pre-alignment is not very convincing. Other conditional generation methods also align text/image to the shape latent space, by either pulling their representations together or learning diffusion models. Further discussion of how this concept is better than the others should be added.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. If you have paired data for text-image-shape alignment training, why not just directly train a latent diffusion model conditioned on image/text to generate 3D objects like SDFusion[1]? In contrast, if you already aligned text/image/shape features in latent space, is it still necessary to use a heavy conditional generator such as latent diffusion to achieve your tasks? It feels like you only need one of these two techniques.
2. In your qualitative comparison, you mentioned that your method generates more details of 3D shapes while 3DS2V produces discontinuous surfaces and noisy results. However, such results are not seen in their paper; I wonder why they fail to produce good results in Figures 2 and 3. Is it because you re-trained their model on your dataset, because you directly used their pre-trained checkpoint, or for other reasons? More discussion and insight should be provided.
3. You pre-aligned the text/image/shape feature first and use text/image feature to train a latent diffusion model to achieve conditional generation. However, as far as I know, if the input condition and the target feature are too similar when training the diffusion model, the diversity of the generation result would be limited and the purpose of using the diffusion model would disappear. How do you avoid such a situation from happening?
4. Please answer the problem mentioned in the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our paper's originality and importance to the community. Thanks for the appreciation for our visual results demonstrations and thoughtful comments. We will attempt to address your concerns in four aspects in the following.
**Q1: Qualitative results of the diversity.**
**A1:** We show qualitative results in Figure 1 of the rebuttal PDF, demonstrating the diversity of the generated results. Moreover, we further illustrate the nearest neighbors from the training samples in Figure 2 of the rebuttal PDF to verify that the trained model does not memorize the training samples.
**Q2: Motivations and effectiveness of alignment-before-generation approach.**
**A2:** We recall our goals at the beginning of the response, developing a conditional 3D shape generative model to generate high-quality 3D shapes to conform to the given images/texts. And the model should work on all the categories of data, as many as we have, instead of a single category or a few.
* To achieve this, we started exploring training conditional 3D shape generative models on the whole of ShapeNet, a large-scale 3D shape benchmark, several months ago. During this exploration, we found that even when a conditional 3D shape generative model trains successfully, the samples from the model do not conform well to the corresponding conditions, much like the generated results of text-to-image generative models. Besides, it is difficult for the model to map the condition to the corresponding 3D shapes for categories with little training data.
We observe that generating a 3D shape based on a 2D image or text is an ill-posed problem since conditions only provide partial information while the remaining information is missed, necessitating priors from the latent space of 3D shape distribution. However, the latent space of 3D shapes significantly differs from 2D images and texts and leads to difficulties for the model in mapping 2D information and textual information to valid priors in a pure 3D shapes latent space.
However, most previous works tend to encode the 3D shapes into the latent space that contains only geometry information without considering the semantic information, and none of the existing works manage to train an image-/text-conditioned generative model on the whole ShapeNet. Therefore, we turn to represent 3D shapes in a latent space that captures both geometric and semantic information, which alleviates the difficulties for the generative models in mapping semantic information to 3D shape priors. Moreover, the success of DALLE-2, a generative model that first trains a prior model from an aligned CLIP text space to an aligned CLIP image space, inspires us to devise the alignment-before-generation approach.
* As a result, our model is the first that performs image- and text-conditioned 3D shape generation tasks on all ShapeNet categories, and the model exhibits generalization ability to unseen categories to some extent. Moreover, with the release of another large-scale dataset, Objaverse, and the incoming Objaverse-XL, we believe that our method can extend and scale up to such large-scale datasets. (Currently, we have successfully scaled up our method on Objaverse; the qualitative results are shown in Figure 1 of the rebuttal PDF, and the pre-trained model will be released.)
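As an illustrative aside (a hedged numpy sketch, not the paper's actual implementation; all names and the loss formulation shown are assumptions): aligning shape, image, and text embeddings in a shared latent space is commonly trained with a symmetric CLIP-style contrastive (InfoNCE) loss over cosine similarities.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric contrastive loss between two batches of embeddings;
    row i of `a` is assumed paired with row i of `b`."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature           # pairwise cosine similarities
    labels = np.arange(len(a))               # matched pairs on the diagonal

    def ce(l):  # cross-entropy pulling the diagonal entries up
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    return (ce(logits) + ce(logits.T)) / 2

# Aligning a shape latent to CLIP image and text embeddings would then add
# info_nce(shape_emb, image_emb) + info_nce(shape_emb, text_emb) to the
# training objective (hypothetical variable names).
```

Perfectly matched batches yield a much lower loss than mismatched ones, which is what drives the three modalities toward a shared latent space.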
**Q3: Relations between alignment and conditional generation.**
**A3:** Thanks for the reviewer's interest in our alignment-before-generation approach. Alignment and conditional generative models could coexist, and our experiments reveal that alignment before generation enhances the generative model.
* Contrastive pre-training cannot endow a model such as CLIP with generative ability; it aims to train the model to capture multi-modal information in an aligned latent space. For SITA-VAE, contrastive learning constrains the model to capture geometric and semantic information in an aligned latent space. However, SITA-VAE can only reconstruct the neural field of a given 3D shape latent, necessitating a generative model for producing a 3D shape latent.
* Our experimental results further prove the effectiveness of the training generative model on the aligned latent space. Besides, the recent foundation text-to-image generative model DALLE-2 also indicates that learning on the aligned latent space enhances the generation process. Specifically, DALLE-2 trains a prior network that maps text embeddings to image embeddings on the latent space of CLIP and a decoder network to generate images with given image embeddings. The generative ability of DALLE-2 also proves the effectiveness of training the generative models on the aligned latent space.
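The CLIP-style contrastive alignment discussed above can be sketched as a symmetric InfoNCE loss over paired embeddings. This is a minimal numpy sketch, not the actual SITA-VAE code: the temperature value, batch construction, and loss weighting are assumptions.

```python
import numpy as np

def clip_style_contrastive_loss(shape_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (shape, image) embeddings.

    Minimal numpy sketch of a CLIP-style alignment objective; the real
    SITA-VAE loss terms and temperature are assumptions.
    """
    # L2-normalize so the dot product is a cosine similarity.
    s = shape_emb / np.linalg.norm(shape_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = s @ v.T / temperature               # (B, B) similarity matrix
    labels = np.arange(len(logits))              # matching pairs on the diagonal

    def cross_entropy(lg, y):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the shape->image and image->shape directions, as in CLIP.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

When the paired embeddings coincide, the diagonal dominates the similarity matrix and the loss is small; mismatched pairings drive it up.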
**Q4: Clarification on the comparison with 3DS2V.**
**A4:** According to the paper, 3DS2V only trains image-/text-conditioned generation tasks on a subset of ShapeNet, and no pre-trained checkpoints on all-category ShapeNet were available at that time. To make a fair comparison, we re-train 3DS2V with the same CLIP condition and parameters as ours. It has two failure cases. The first is that although the model maps a given condition to a high-quality 3D shape latent, the decoded shape is not visually or semantically close to the condition. The second is worse: since mapping 2D information to 3D shapes is an ill-posed problem, the model sometimes fails to sample valid 3D latent codes, and the decoder then produces a noisy shape. Visual and numerical results support these statements. We also observe that the image-conditioned results in Figure 13 of the 3DS2V paper are not as good as the category-conditioned results in its Figure 8, which reflects our observation.
---
Rebuttal Comment 1.1:
Title: Following comment
Comment: Thank the authors for the hard work! The feedback has addressed most of my concerns.
I will raise my rating to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for the constructive comments and positive feedback!
Comment: Thanks for recognizing our response. We are glad about the favorable assessment of our paper. We will rearrange the qualitative results to demonstrate the generated diversity in the revision. | Summary: This work is about 3D shape generative model focused on image-conditioned and text-conditioned generation. The authors aligned the latent space of a shape autoencoder to CLIP's image encoder and text encoder. Then generative diffusion models are trained on the aligned latent space. This enables shape generation given image or text as conditional input. The authors showed some good results in both tasks.
Strengths: The authors showed some good generation results for both the task of image-conditioned generation and text-conditioned generation. The writing is also clear and easy to follow.
Weaknesses: 1. The autoencoder network (Fig 1 a) is similar to the network used in [63]. Thus the performance boost shown in Table 1 seems to be because of CLIP. This should be emphasized or ablated somewhere in the main paper.
2. Following the above comment (Table 1), the authors only compared with "Learned queries" results from [63]. According to [63], another design "Point queries" achieved better results than both this work and "Learned queries".
3. It would be better if the authors can show some visualization comparisons of the autoencoding results.
4. L136, the length is (1+L_i) instead of L_i. This is explained in later sections (L217) but is still causing confusion.
5. Another difference with respect to [63] is that, when training the diffusion models, this work used a UNet-style transformer [4] instead of a simple stacked self-attention network.
6. Some pieces of writing can still be improved. For example, when talking about a design or an equation, we should mention why we are doing this or discuss some insights behind this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This work proposed a method for image-conditioned and text-conditioned shape generation. The authors combined several components (3DS2V-style shape autoencoding network, clip and unet-style transformer denoising network). All these components are not proposed by this work which weakens the novelty of this paper. However, I still believe the authors delivered some good results in shape generation. I hope the authors can clarify my concerns mentioned in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to reviewer XFVg for the positive feedback and insightful comments. Moreover, we are encouraged that the reviewer appreciates our results and enjoys reading the manuscript. We reply to the concerns in six aspects below. Furthermore, we will update Table 1 and related discussion in future revisions.
**Q1: Insights and motivations.**
**A1**: At the beginning of our response, we reiterate our goals, insights, and discovery.
1. We study generating general 3D shapes based on given 2D images or texts.
2. Even if a conditional 3D shape generative model trains successfully on the whole ShapeNet dataset, its samples do not match the given conditions well. Moreover, for categories with few training samples, the model struggles to map the condition to the corresponding 3D shapes.
3. We observe that generating a 3D shape from a 2D image or text is an ill-posed problem, since the condition provides only partial information while the rest is missing, necessitating priors from the latent space of the 3D shape distribution. This makes it difficult for the model to map 2D and textual information to valid priors in a purely geometric 3D shape latent space.
4. However, most previous works tend to encode 3D shapes into a latent space that contains only geometric information, without considering semantic information, and none of the existing works manage to train an image-/text-conditioned generative model on the whole ShapeNet dataset.
5. We turn to representing 3D shapes in a latent space that captures both geometric and semantic information, alleviating the generative models' difficulties in mapping semantic information to 3D shape priors. Moreover, the success of DALLE-2, which learns on the aligned CLIP latent space, inspired us to devise the alignment-before-generation approach.
6. As a result, our model is the first to perform image- and text-conditioned 3D shape generation tasks on the whole ShapeNet dataset.
**Q2: Ablation on CLIP.**
**A2:** Thanks for the thoughtful comments. In the main paper, we only show a brief chart in the first column of Figure 5 to ablate the effectiveness of utilizing different Vision-Language Models (VLMs). We extend the experiments to ablate the impact of VLMs, and the results are shown in the table below.
The last three rows of the table are the three settings we used to train the VAE. Ours (w/o Contrast) is a VAE trained without the contrastive loss, Ours (w SLIP) uses the image and text encoders from SLIP, and Ours (w CLIP) is the SITA-VAE, where we employ CLIP's image and text encoders to train the VAE. The results showcase the effectiveness of utilizing CLIP.
| | Overall | Selected | Table | Chair | Airplane | Car | Rifle | Lamp |
|:-------------------|:-------:|:--------:|:------:|:------:|:--------:|:------:|:------:|:------:|
| 3DS2V (PQ) | 0.967 | 0.967 | 0.971 | 0.964 | 0.969 | 0.969 | 0.96 | 0.956 |
| Ours (w/o Contrast)| 0.961 | 0.958 | 0.958 | 0.962 | 0.961 | 0.968 | 0.952 | 0.945 |
| Ours (w SLIP) | 0.962 | 0.956 | 0.96 | 0.959 | 0.957 | 0.966 | 0.954 | 0.937 |
| Ours (w CLIP) | 0.966 | 0.964 | 0.965 | 0.966 | 0.966 | 0.969 | 0.967 | 0.95 |
**Q3: Comparison with Point Queries in 3DS2V.**
**A3:** Thanks for carefully pointing out the incomplete comparison in Table 1. We supplement Table 1 and report the results above.
1. Performance and analysis: The performance difference is due to the different sizes of the 3D shape latent. 3DS2V employs a deterministic autoencoder, while SITA-VAE contains a KL block that compresses the 3D shape latent. Therefore, the 3D shape decoder in SITA-VAE reconstructs 3D shapes from a lower-dimensional latent, leading to lower performance. Such results match the conclusion of Section 8.1 in 3DS2V, which shows that a compressed latent decreases performance.
2. Motivation for using learnable query embeddings: We design the learnable query embeddings to combine the local embedding setting from Perceiver and the global embedding setting from CLIP. Specifically, the 3D shape encoder contains (L_s + 1) learnable query embeddings: one global head-class token and L_s shape tokens, which learn a semantically and geometrically aligned 3D shape representation at both the global and local levels. Moreover, the learnable query embeddings and the cross-attention mechanism allow the model to handle large inputs, making it potentially suitable for further scaling up.
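The Perceiver-style readout in point 2 can be sketched as cross-attention from a fixed set of learnable queries to an arbitrarily large point set. This is a single-head numpy sketch; the real encoder uses multi-head attention with learned projection matrices, so the weights below are illustrative stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def learnable_query_cross_attention(queries, point_feats, w_q, w_k, w_v):
    """Single-head cross-attention from (L_s + 1) learnable queries to N
    point-cloud features, Perceiver-style.

    queries:     (L_s + 1, d)  learnable query embeddings
    point_feats: (N, d)        embedded input points (N can be large)
    w_q, w_k, w_v: (d, d)      stand-ins for the learned projections
    """
    q = queries @ w_q                                # (L_s + 1, d)
    k = point_feats @ w_k                            # (N, d)
    v = point_feats @ w_v                            # (N, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (L_s + 1, N)
    return attn @ v                                  # fixed-size output
```

The output shape is (L_s + 1, d) regardless of N, and the cost is linear in N, which is why this readout can absorb larger point-cloud inputs without changing the latent size.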
**Q4: Comparison of auto-encoding.**
**A4:** We show the visual comparison of auto-encoding in Figure.4 in the rebuttal PDF.
**Q5: Clarification of the notation in line 136.**
**A5:** The image encoder in CLIP is a ViT-based architecture. Thus, the sequence in the encoder consists of one head-class token and L_i local tokens from the patch embeddings, so the sequence length of the last hidden layer in the ViT is (1 + L_i). The tokens are used as follows.
1. When computing the multi-modal contrastive losses, only the head-class token will be projected into an image embedding in the training process of SITA-VAE.
2. In training generative models, we extract the sequence from the last hidden layer in the image encoder, consisting of one head-class token and L_i local tokens to better capture global and local information.
**Q6: The architecture of the Diffusion Model.**
**A6:** We implement the denoiser based on two architectures: a UNet-like transformer and a stacked attention block network, where each attention block contains a self-attention module and a cross-attention module.
During experiments, we found that the UNet-like transformer denoiser converges with fewer iterations, as concluded in [4], and the training phase is stable. Therefore, we employ the UNet-like transformer architecture for the rest of the experiments.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Please look over the author response and the other reviews and update your opinion. Please ask the authors if you have additional questions before the end of the discussion period.
---
Rebuttal Comment 1.2:
Comment: Thanks for the clarification. I will keep my positive rating.
---
Reply to Comment 1.2.1:
Title: Thanks for the valuable comments and positive feedback!
Comment: Thanks for recognizing our response. We are glad about the favorable assessment of our paper. We will rearrange the experimental comparisons and clarification of the notions in the revision. | Summary: This paper proposed a conditional generation model which aims to solve the alignment issue in image-to-shape or text-to-shape generation. The key idea is to learn a aligned representation among 3D shapes, images, and texts. To achieve that, the author proposes SITA-VAE with contrastive loss to force the shape's latents to be aligned with the pretrained vision-language model. After that, a LDM is trained to learn the diffusion process in the latent space. In the test time, it follows the previous works to use classifier-free guidance to perform conditional generation. The proposed method is evaluated on ShapeNet and Cartoon Monster 3D shapes.
Strengths: * Proposes to learn an aligned space for 3D shapes, images, and texts with a 2D vision-language model. This essentially leverages abundant 2D data for 3D modeling. The scarcity of paired data is one crucial reason why conditional 3D modeling does not perform as well as its 2D counterpart.
* The conditional generation results look great and align with the inputs when compared with the baselines.
* It is surprising that the aligned space does not hurt the reconstruction results.
Weaknesses: * Novelty is somewhat limited. The first-stage model is mostly based on 3DILG, and multi-modality conditional generation has been explored in previous work such as SDFusion.
* Scalability is an issue, as the model needs paired 3D-image-text data to work.
* Some parts of the writing and figures are misleading. The choice of metric in Table 2 is problematic.
Please see the "Questions" for the details.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The results in Fig. 2 seems a bit too sharp and I wonder if the model is memorizing the outputs. Can the author shows the nearest neighbor sample from the datasets?
2. In figure 1, the model is outputting occupancy but the final result is denoted as "mesh". I think the output should be occupancy and the mesh is obtained via marchingcube, right? Learning to generate mesh directly is very different from generating occupancies. The writing also use the word "mesh" and it is a bit confusing.
3. In Table 2, the use of SITA to evaluate the performance of conditional generation is unfair compared to other baselines since the LDM is trained on the space learned with SITA.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Authors have address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to reviewer rVoM for the thoughtful comments. We appreciate the reviewer's approval of our alignment-before-generation approach. Furthermore, we are encouraged that the reviewer recognizes our visual demonstrations. We provide dedicated replies to the reviewer's comments and questions in five aspects below.
**Q1: Novelty.**
**A1:** At the beginning, we acknowledge that image-/text-conditioned 3D shape generation tasks already exist. Still, we reiterate that our alignment-before-generation approach is the first to train an image-/text-conditioned 3D shape generative model on the whole ShapeNet dataset. Furthermore, we are working on scaling our model to a larger-scale 3D benchmark, and the model exhibits good scalability in our experiments.
1. The difference between 3DILG and the first stage model, Shape-Image-Text-Aligned Variational Auto-Encoder (SITA-VAE).
a. First, 3DILG only encodes 3D shape information in their latent space. In contrast, our SITA-VAE aligns semantic information into the 3D shape latent space by leveraging contrastive learning;
b. The second difference is that 3DILG encodes the 3D shape as paired explicit center coordinates and point features, which requires Farthest Point Sampling and k-nearest-neighbor clustering to handle the input, whereas our SITA-VAE utilizes learnable query embeddings to encode the 3D shapes implicitly, and our encoder can directly learn geometric and semantic information from the whole input point cloud;
c. The third is that 3DILG employs eight-layer multi-layer perceptrons (MLPs) to reweight the explicit center coordinates and the output tokens from its decoder and to predict the occupancy of query points, while the decoder in our SITA-VAE utilizes cascaded attention modules to predict the occupancy of query points;
d. 3DILG trains in a VQ-VAE manner.
2. The difference between our Aligned Shape Latent Diffusion Model (ASLDM) and previous multi-modality conditional generation, such as SDFusion.
a. First, our ASLDM differs from previous methods in that it learns on an aligned latent space, whereas previous conditional generative models mostly train on a single-modality space.
b. The second is the representation of 3D shape latents. SDFusion encodes 3D shapes as discrete latent voxels via a VQ-VAE, which is potentially challenging to scale up to higher-resolution shapes or larger datasets, since that requires higher-resolution discrete latent voxels or larger codebooks. In contrast, our ASLDM learns on a continuous, low-dimensional shape latent, which is flexible. After successfully training on the whole ShapeNet dataset, we further train the model on a larger dataset, and the results in Figure 1 of the rebuttal PDF demonstrate this preliminary achievement.
**Q2: Scalability.**
**A2:** We acknowledge that 3D data is crucial for the scalability of conditional 3D shape generation tasks. Fortunately, another large-scale benchmark, Objaverse, which is ten times the size of the entire ShapeNet dataset, has been released. Since we have developed an autonomous pipeline (mentioned in the global response) for producing shape-image-text triplets, we could quickly start training on the new dataset. Moreover, we have some preliminary achievements, shown in Figure 1 of the rebuttal PDF.
**Q3: Clarification on Table 2.**
**A3:** Before explaining the content of Table 2, we clarify the typo in Table 2, as the repeated SI-S (ULIP) and SI-S (SITA) under 'Text-Conditioned' should be ST-S (ULIP) and ST-S (SITA). Thanks for the correction.
We propose two new metrics to evaluate the conditional 3D shape generative models. The first is the shape-image score (SI-S), measuring the similarity between image conditions and generated 3D shapes. In particular, we use an image encoder to convert the image condition into an image embedding and a 3D shape encoder to transform the sampled 3D shape into a shape embedding. The SI-S is defined as the Cosine Similarity of the extracted shape and image embeddings. Another metric is the shape-text score (ST-S), which measures the similarity between text conditions and generated 3D shapes and implements a similar approach to SI-S. During the evaluation, although the 3D encoders in the SITA-VAE could extract 3D shape embeddings aligned to CLIP space, as the reviewer points out, only employing the 3D shape encoders from SITA-VAE to compare the generated results is unfair since ASLDM trains on the latent space of SITA-VAE. Therefore, for a comprehensive evaluation and comparison, we utilize an additional 3D shape encoder from ULIP, which pre-trains for aligning to the CLIP space. In detail, SI-S (SITA) and ST-S (SITA) indicate the score computed with the 3D encoder from SITA-VAE, and SI-S (ULIP) and ST-S (ULIP) indicate the score calculated with the 3D encoder from ULIP. The result in Table 2 proves that our model performs better under both ULIP and SITA-VAE.
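As a sketch, the SI-S computation described above reduces to a cosine similarity between paired embeddings in the shared CLIP-aligned space; averaging over the evaluation set is our assumption, and ST-S is identical with text-condition embeddings in place of image embeddings.

```python
import numpy as np

def shape_image_score(shape_embs, image_embs):
    """Shape-Image score (SI-S): mean cosine similarity between
    generated-shape embeddings and their image-condition embeddings,
    both assumed to live in the shared CLIP-aligned space.

    Sketch of the metric described in the rebuttal; the averaging over
    the evaluation set is an assumption.
    """
    s = shape_embs / np.linalg.norm(shape_embs, axis=1, keepdims=True)
    v = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    return float((s * v).sum(axis=1).mean())  # per-pair cosine, averaged
```

Swapping in the ULIP or SITA-VAE shape encoder only changes how `shape_embs` is produced; the score itself is unchanged, which is what makes the SI-S (ULIP) vs. SI-S (SITA) comparison possible.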
**Q4: Nearest Neighbor of the generated results from datasets.**
**A4:** Please refer to the visual demos in Figure.3 in the rebuttal PDF, which illustrates the Top 3 nearest neighbors of the generated 3D shapes from the **training** dataset. The visual demos indicate that our model can generate novel shapes based on the learned 3D shapes in the training set rather than retrieve or memorize a specific shape.
**Q5: Clarification on Marching Cubes.**
**A5:** We apologize for the confusion caused by our negligence while drawing the pipeline figure. The direct output of the 3D shape decoder is the occupancy of the query points, and the mesh is produced by running the marching cubes algorithm on the occupancy of sampled volume points. Thanks for the rectification; we will add a legend indicating the marching cubes step. Moreover, the mixed use of 'mesh' and '3D neural shape' in our paper is somewhat confusing. In the revision, we will correct the inappropriate words.
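The occupancy-to-mesh step described above can be sketched as follows. The sphere occupancy field and grid resolution are illustrative stand-ins for the decoder; the mesh itself would then be extracted from the dense volume with a marching cubes routine (e.g. `skimage.measure.marching_cubes(volume, level=0.5)`), which is a separate post-process, not part of the decoder.

```python
import numpy as np

def sample_occupancy_volume(occupancy_fn, resolution=32, bound=1.0):
    """Evaluate an occupancy function on a dense grid in [-bound, bound]^3.

    The decoder never outputs a mesh directly: the mesh is recovered by
    running marching cubes on this volume afterwards.
    """
    axis = np.linspace(-bound, bound, resolution)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    points = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    return occupancy_fn(points).reshape(resolution, resolution, resolution)

# Illustrative occupancy field: a sphere of radius 0.8 stands in for the decoder.
sphere = lambda p: (np.linalg.norm(p, axis=1) < 0.8).astype(np.float32)
```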
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough replies and the effort of additional experiments. Most of my concerns are resolved and I am happy to adjust the rating to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for the insightful comments and positive feedback!
Comment: Thanks to reviewer for recognizing our response and additional experiments in the rebuttal PDF. We are glad about the favorable assessment of our paper. We will rearrange the nearest neighbor experiments and the clarification on Marching Cubes in the revision. | Summary: This paper proposed a VAE-based text-to-3D shape generation method. The authors designed an alignment-before-generation approach to narrow the gap between 3D shapes and the 2D or text condition. They first train a Shape-Image-Text-Aligned Variational Auto-Encoder to align the representations between the 3D shapes and the 2D or text inputs. Then, the use latent diffusion model to denoise the shape embeddings to match the conditions. The overall presentation is good and the experiments are extensive.
Strengths: 1. The narratives are good and sound.
2. The SITA-VAE helps to align the representations between the three modalities.
3. The generation only requires some denoising steps on the latent embeddings.
4. The experiments are extensive and the authors provide sufficient visual demos and show the effectiveness of the method.
Weaknesses: 1. It seems that shape-image-text align training requires a lot of paired shape-image-text data, which could be a huge challenge in the generalization to new categories.
2. The experimental implementation is not clear enough and could be improved.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Since this method is trained with ground-truth 3D data, which limits its generalization ability, would it be better to show some failure cases where the condition fails to lie in the aligned representation space?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The author mentioned the limitation of the requirement of the 3D training data, which could be a huge problem in the generalization of this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to reviewer fo3T for the positive feedback and thoughtful comments, and we are encouraged that the reviewer recognizes our effort on visual demonstrations and enjoys reading the manuscript. In the following, we reply to the individual concerns in three aspects.
**Q1: What if the condition fails to lie in the aligned representation space?**
**A1:**
1. We additionally show results conditioned on MVImgNet images, which contain different image styles and some unseen categories, in Figure 3 of the rebuttal PDF. The visual demos validate the generalization ability of our model.
2. Alignment-before-generation enhances the generalization ability of our model to some extent, as the CLIP latent space is a highly expressive and semantically rich space that captures both visual and textual information. By aligning the 3D space to the CLIP latent space, the model can utilize the semantic information captured in the CLIP space to close the semantic distance among the 3D latents. Benefiting from this, when the condition is unseen data relative to our training set, the model at least generates a semantically coherent 3D shape latent.
3. We employ the features of the last layer of the CLIP encoder rather than its final embedding. The last-hidden-layer tokens capture both global and local information from the conditions, leading the model to match the conditions to the target 3D shape latent in terms of global semantics as well as local semantic parts. Therefore, even when the global condition information fails to lie in the aligned representation space, some local parts usually do, which facilitates the model's generalization ability.
4. We acknowledge that generalization is a significant challenge, highlighting the need for more efficient methods and larger-scale 3D datasets from the entire research community. In this work, we propose the alignment-before-generation approach, which enhances the generalization ability of our model. Additionally, the recently released Objaverse dataset is valuable for developing more effective 3D shape generation models. The continued efforts of the research community in this direction will lead to significant advancements in the field of 3D shape generation.
**Q2: Shape-image-text align training requires a lot of paired shape-image-text data, which could be a huge challenge in generalizing to new categories.**
**A2:**
1. **Data preparation:** Please refer to global response.
2. **Generalization to new categories:** Our model is the first to perform image-/text-conditioned generation on the whole ShapeNet dataset, and it shows potent generative ability on common categories after training on the aligned latent space of the entire ShapeNet dataset. Benefiting from the autonomous data pre-processing procedure, we could quickly scale the training triplets to recent 3D shape benchmarks. Moreover, we find it practicable to fine-tune the model on an enlarged dataset containing ShapeNet, Objaverse, and other released 3D datasets, with some visual results shown in Figure 1 of the rebuttal PDF. Our model shows vast potential to scale up as a foundation model for conditional 3D shape generation tasks.
**Q3: Experimental Implementations.**
**A3:** Since we present **Model Implementations** in the *global response* and generation comparison details in **Q3: Clarification on Table 2** of the response to *Reviewer rVoM*, we present the **ablation study** as a supplement here.
We conduct ablation studies on our method in several aspects.
1. The effectiveness of training the generative model in the aligned space. Figure 4 in the main text compares the results of two generative models under the same conditions. Our ASLDM produced the upper results, and the lower results were produced by a generative model trained in a single 3D shape latent space.
2. The effectiveness of the vision-language model (VLM). We ablate the efficacy of the VLM on four tasks and two VLMs, SLIP and CLIP. We compare three VAEs: one trained without the contrastive loss, one trained with the image and text encoders from SLIP, and one trained with the image and text encoders from CLIP.
a. The first is reconstruction. We use the 3D shape encoders from the three VAEs to conduct shape reconstruction tasks on the testing set of ShapeNet.
b. The second is standard classification. We use the 3D shape encoders and text encoders from the three VAEs to perform a classification task on the testing set of ShapeNet. As there are 55 categories in ShapeNet, we set up 55 texts with the template "a 3D model of (*)", where * indicates a category. After computing the cosine similarity between a given 3D shape and the 55 texts, we regard the category with the highest cosine similarity as the classification result.
c. The third and fourth are zero-shot classification, similar to the second task but evaluated on the testing set of ModelNet, which contains 40 categories. Specifically, we compute Top-1 and Top-5 accuracy. For Top-1 accuracy, we only count a shape as correctly classified when its ground-truth category has the highest cosine similarity. For Top-5 accuracy, a shape counts as correctly classified if its ground-truth category appears among the five highest cosine similarities.
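The zero-shot classification protocol above can be sketched with a cosine-similarity top-k check. This is a numpy sketch under stated assumptions: the prompt construction and the shape/text encoders are stand-ins for the paper's setup, and `labels` holds the ground-truth category index of each shape.

```python
import numpy as np

def zero_shot_topk_accuracy(shape_embs, text_embs, labels, k=5):
    """Top-k zero-shot accuracy: a shape counts as correct if its
    ground-truth category is among the k text prompts ("a 3D model
    of (*)") with the highest cosine similarity. Sketch only.
    """
    s = shape_embs / np.linalg.norm(shape_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = s @ t.T                           # (num_shapes, num_classes)
    topk = np.argsort(-sims, axis=1)[:, :k]  # indices of the k best classes
    hits = [label in row for label, row in zip(labels, topk)]
    return float(np.mean(hits))
```

Top-1 accuracy is the same call with `k=1`; only the cutoff changes, not the similarity computation.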
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Please look over the author response and the other reviews and update your opinion. Please ask the authors if you have additional questions before the end of the discussion period.
---
Rebuttal Comment 1.2:
Comment: Thank you for the detailed reply. I will keep my rating.
---
Reply to Comment 1.2.1:
Title: Thanks for the thoughtful comments and positive feedback!
Comment: Thanks for recognizing our response. We are glad about the favorable assessment of our paper and will rearrange the implementation details in the revision. | Rebuttal 1:
Rebuttal: # To All Reviewers:
We express our gratitude and appreciation to all the reviewers contributing to the review process. The reviewers, fo3T, rVoM, XFVg, and 73Gb, have commended the paper for:
1. Well-written presentation.
2. Good visual demos.
3. Solid technical foundation.
We are also thankful for their insightful comments, which have provided valuable feedback to enhance our work and generate promising ideas for future research. We have endeavored to respond to the reviewers' queries and remarks with utmost clarity and detail in our individual replies. If the paper is accepted, we will restructure the final camera-ready version and incorporate more discussion into the main text.
**Motivation and novelty (Reviewer fo3T, rVoM, XFVg).**
We present a novel alignment-before-generation approach for conditional 3D shape generation tasks, showcasing the shape-image-text-aligned space's effectiveness for cross-modality 3D shape generation.
1. Compared to the previous 3D shape representation methods characterizing the geometric information of each 3D shape only, we investigate a 3D shape representation that further brings the semantic information into the neural fields. Meanwhile, the 3D shape representation is decodable.
2. Compared to the previous 3D shape generation methods trained on partial ShapeNet, the proposed Aligned Shape Latent Diffusion Model (ASLDM) successfully trains on the composition of the whole ShapeNet dataset and the 3D Cartoon Monster dataset based on the proposed 3D shape representation.
**What does this work achieve:**
The contribution of this work lies in training an image-/text-conditioned 3D shape generative model on the composition of the whole ShapeNet and 3D Cartoon Monster datasets, which is the first of its kind. The generated results conform to the given conditions, demonstrating the effectiveness of the proposed approach.
**What does this paper propose:**
This paper proposes an alignment-before-generation approach to tackle conditional generative models' challenges in producing consistent 3D shapes with the given conditions.
**What does this project mean:**
This project reveals a fundamental but systematic pipeline of 3D shape generation tasks. The proposed alignment-before-generation approach enhances and stabilizes the training process of the generative model, and the introduced scalable data pre-processing scheme manages to create Shape-Image-Text triplets based on existing and potential large-scale 3D shape benchmarks.
**What does the current ASLDM differ from that at the paper submission:**
Benefiting from the release of more extensive scale benchmarks, we scaled up the ASLDM, resulting in improved performance.
**Data preparation:**
During the data pre-processing, we build shape-image-text triplets to exploit the power of large-scale models. In particular, we employ a depth-conditioned ControlNet to render images under different views: we design render prompts for rendering 3D shapes into corresponding images via their depth images from different views. The images and render prompts are then collected as paired images and texts. Moreover, the texts are collected in three manners:
1. The category and descriptions of the shape from the ShapeNet dataset.
2. The render prompts mentioned above, which pair closely with the rendered images.
3. Texts following the template in the recent work ULIP.
**Model Implementation**:
1. Shape-Image-Text-Aligned Variational Auto-Encoder (SITA-VAE).
a. SITA-VAE implements a perceiver-based transformer architecture. The 3D shape encoder consists of one cross-attention block and eight self-attention blocks. All the attention modules consist of multi-head attention (with 12 heads and 64 dimensions per head), layer normalization (Pre-Norm), a Feed-Forward Network (with 3072 dimensions), and GELU activation.
b. The learnable query embeddings (with 513 tokens and 768 dimensions per token) consist of one head-class token and 512 shape tokens. The shape tokens are projected into the VAE space (with 512 tokens and 64 dimensions per token) after processing by the 3D shape encoder, and another projection layer transforms the sampled latent back (to 768 dimensions) for 3D shape reconstruction. We compute the multi-modal contrastive losses on the head-class token and the KL-divergence loss on the shape tokens.
c. The 3D shape decoder (neural field decoder) has 16 self-attention blocks with a final cross-attention block for implicit neural field modeling. All the attention modules consist of multi-head attention (with 12 heads and 64 dimensions per head), layer normalization (Pre-Norm), a Feed-Forward Network (with 3072 dimensions), and GELU activation. Given a query 3D point and its corresponding shape tokens, the decoder iteratively computes self-attention and then cross-attention to predict the occupancy of the query point. We compute the binary cross-entropy loss between the prediction and the ground truth.
d. Image-/text-encoder from CLIP (ViT-L-14) in SITA-VAE are frozen during training and inference.
e. During the training phase, we follow 3DILG, which first normalizes all meshes into [-1,1] and then separately and equally samples volume points (randomly sampled) and near-surface points with ground-truth inside/outside labels from the watertight mesh.
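The Pre-Norm attention blocks described above (12 heads of 64 dimensions each, i.e., a 768-dimensional model width) can be sketched as follows. This is a simplified, hypothetical illustration with identity Q/K/V projections and no learned weights; a real implementation would add learned projections, the 3072-dimensional FFN with GELU, and dropout:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Pre-Norm: normalize over the feature (last) axis before attention.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention_block(x, context=None, heads=12, head_dim=64):
    """One Pre-Norm multi-head attention block (sketch, not the authors'
    code). Self-attention when `context` is None; cross-attention (e.g.,
    query tokens attending to point features) when a context is given."""
    d = heads * head_dim                        # 12 * 64 = 768 model width
    kv = x if context is None else context
    q, k, v = layer_norm(x), layer_norm(kv), layer_norm(kv)
    # Split into heads: (tokens, d) -> (heads, tokens, head_dim)
    qh = q.reshape(-1, heads, head_dim).transpose(1, 0, 2)
    kh = k.reshape(-1, heads, head_dim).transpose(1, 0, 2)
    vh = v.reshape(-1, heads, head_dim).transpose(1, 0, 2)
    scores = softmax(qh @ kh.transpose(0, 2, 1) / np.sqrt(head_dim))
    out = (scores @ vh).transpose(1, 0, 2).reshape(-1, d)
    return x + out                              # residual connection
```

For example, the encoder's cross-attention step would pass the 513 query embeddings as `x` and the point-cloud features as `context`, keeping the output at 513 tokens of width 768.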
2. Aligned Shape Latent Diffusion Model (ASLDM)
a. ASLDM builds on a UNet-like transformer architecture. It consists of 13 self-attention blocks with skip-connections between shallow and deep layers. All the attention modules consist of multi-head attention (with 12 heads and 64 dimensions per head), Layer Normalization (Pre-Norm), and a Feed-Forward Network (with 3072 dimensions).
b. The ASLDM trains with the MSE loss. The number of training diffusion steps is 1000, with beta ∈ [0.00085, 0.012] under a scaled linear scheduler.
Both modules use the AdamW optimizer with a 1e-4 learning rate.
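The scaled linear schedule mentioned above is commonly implemented as linear interpolation in sqrt(beta) space; a minimal sketch under that standard assumption (exact details of the authors' scheduler may differ):

```python
import math

def scaled_linear_betas(num_steps=1000, beta_start=0.00085, beta_end=0.012):
    """Scaled-linear beta schedule: interpolate linearly between
    sqrt(beta_start) and sqrt(beta_end), then square, yielding a
    monotonically increasing noise schedule over `num_steps` steps."""
    s, e = math.sqrt(beta_start), math.sqrt(beta_end)
    return [(s + (e - s) * t / (num_steps - 1)) ** 2 for t in range(num_steps)]
```

The endpoints recover beta_start at step 0 and beta_end at the final step, matching the [0.00085, 0.012] range stated above.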
Pdf: /pdf/73dafcc713f808ab006f5defffad65d9f02f934c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Text Alignment Is An Efficient Unified Model for Massive NLP Tasks | Accept (poster) | Summary: While next word prediction produces task-general models that can do a wide variety of tasks when prompted, it is not an efficient formulation in that it requires very large models. On the other hand, fine-tuned models achieve higher performance at smaller sizes but are specific to a few tasks. This paper proposes text alignment as a middle ground that encompasses a wide range of tasks while allowing for smaller models than next word prediction. Concretely, they convert 28 datasets (encompassing tasks like entailment, IR, QA, coref, and consistency) into the text alignment format and fine-tune RoBERTa on them. The resulting model outperforms much larger models (that are instruction-finetuned), as well as RoBERTa with task-specific fine-tuning.
Strengths: (1) The idea of using text alignment as a task-general interface is interesting and seems useful for producing useful task-general models at smaller sizes.
(2) The experiments are extremely thorough, and the method performs well across the board.
(3) The paper is well-written and clear.
Weaknesses: While this section contains some suggested experiments, I support the acceptance of this paper regardless of whether or not they are run during the rebuttal period.
(1) While the paper frames alignment as being more general than multi-task finetuning, they only evaluate the model on tasks seen during alignment finetuning. Indeed, as shown in the task ablations in Table 5, it seems that the model can only do tasks that are included during training. Therefore, I wonder how different the model is from simply doing multi-task fine-tuning, and what clear advantages are provided by using a unified interface. While already strong, I think the results would be even stronger if there were examples of the model doing unseen tasks.
(2) Related to questions of how general alignment is, the tasks included in the training and evaluation feel very close to entailment, which might be a key factor in enabling them all to use the same interface without task-specific heads. Therefore, I wonder how performance would be affected if tasks very far from entailment were included. (For example, you could include POS tagging, where x_1 is the original sentence and x_2 is the sentence with some or all of the words replaced by their part of speech.)
(3) Related to the above, while the paper claims that changing the interface from next word prediction to alignment allows for smaller models, I wonder if the smaller model sizes are simply a result of considering a narrower set of tasks than the instruction-finetuned models. I suspect that FLAN-T5 needs larger model sizes simply because it needs more capacity to do more tasks. One relevant ablation testing this question would be to take the same RoBERTa model and instruction-finetune it on the same 28 datasets as the alignment model.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (1) For handling longer contexts, I wonder if it would make sense to take min_j max_i f(x_1^i, x_2^j) instead of mean_j max_i f(x_1^i, x_2^j) as in the paper, with the interpretation that all of the facts in x_2 should be supported. The mean could fail if x_2 contains mostly supported statements, except for an egregiously unsupported statement in a single chunk.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. Your recognition of our ALIGN framework, experiments and paper writing is truly encouraging. We also appreciate your list of suggested experiments.
**Performance on unseen tasks**
Although we group some training and evaluation datasets into the same type, they can have significantly different formats. For example, the DREAM dataset (used for evaluation) contains QA samples with dialogue contexts, while in our training datasets the QA contexts are usually articles (no dialogues). The TRUE benchmark (factual consistency evaluation, Figure 4) also contains dialogue datasets, and we have no similar (dialogue) datasets in our training data.
As correctly pointed out by Bk1X, our ablation study indicates the inclusion of tasks (e.g., NLI, QA, etc.) during alignment training significantly boosts model performance on datasets of that task. However, as shown in the factual consistency evaluation use case (see Table 3 and Figure 4), our method outperforms existing approaches when good quality training data for the task (e.g., automated machine summarization evaluation) is limited.
We also included an additional comparison between our method and multitask learning in the general response. The experiment results suggest that our unified interface allows the model to better generalize to new datasets in the zero-shot setting.
**Alignment in relation to entailment**
Indeed, we consider entailment to be a special case of the more general definition of alignment. We note we don’t claim that alignment can be used to solve all NLP problems. We intentionally choose a set of related tasks such that they can be effectively modeled and learned using a single interface. Our experiments show this is a good tradeoff. That said, we are excited to explore if alignment can be generalized to more distant tasks such as POS tagging.
**smaller model, narrower tasks**
We would like to thank the reviewer for bringing up the ablation experiment. However, in this experiment, we decide to instruction-finetune a T5 model for the following reasons: 1) RoBERTa is an encoder-based transformer model, and it does not undergo training involving next word prediction; 2) existing instruction-finetuning methods are designed for decoder-based models such as T5 and PaLM; 3) this choice enables us to effectively compare the distinctions between the two training objectives: next word prediction and alignment. We use the T5-base (250M parameters) version to test whether a smaller instruction-finetuned model can still perform well on the alignment tasks compared with the alignment-based model.
We instruction-finetune the T5-base model on the same datasets as our alignment model. We don't convert QA tasks since T5 naturally supports sequence generation. We follow the prompts mentioned in [1, 2] and format the datasets accordingly for the T5 model. The following tables show the results for the instruction-finetuned T5-base.
Table 1. Comparison with the instruction finetuned next word prediction model on in-domain tasks
|Model\EvaluationTasks|NLI|FactVerification|STS|Paraphrase|QA|Coreference|Average|
|-|--|-|--|-|--|-|--|
|Alignment-base(125M)|**70.9**|**83.3**|**89.9**|**91.4**|**78.2**|**81.4**|**82.5**|
|Instruction-finetuned-T5(222M)|53.7|68.7|62.0|83.2|30.3|42.9|56.8|
Table 2. Comparison with the instruction finetuned next word prediction model on zero-shot tasks
|Dataset\Model|Alignment-base|Inst-FT-T5-base|
|-|-|-|
|AXB|**75.1**|65.3|
|AXG|**59.8**|50.8|
|CB|**76.8**|58.9|
|RTE|**83.4**|59.9|
|WNLI|**52.1**|47.9|
|SE14T1|**90.7**|48.5|
|MRPC|66.0|**66.1**|
|DREAM|**71.3**|35.8|
|Quartz|**59.7**|50.6|
|AVG|**70.5**|53.8|
The results show that the alignment-based model achieves better performance than the instruction-finetuned model on both in-domain and zero-shot tasks, demonstrating the effectiveness of the alignment model when trained on a set of alignment tasks. Therefore, the good performance does not come from narrowing the tasks, as the smaller instruction-finetuned T5-base performs worse than the alignment model.
Reference
[1] Chung, Hyung Won, et al. "Scaling instruction-finetuned language models." arXiv preprint arXiv:2210.11416 (2022).
[2] Longpre, Shayne, et al. "The flan collection: Designing data and methods for effective instruction tuning." arXiv preprint arXiv:2301.13688 (2023).
**Aggregation method (min-max vs. mean-max)**
We agree that, if the model used for alignment estimation is perfect, using a min-max formulation as suggested by the reviewer would more accurately reflect the definition of alignment. However, in practice, the alignment estimator can be noisy, and we speculate taking the average helps remove some of that noise. In comparison, the min-max setup would be easily affected by underestimation. We also note mean-max aggregation is widely used in previous automatic evaluation work, such as SummaC [1] and SMART [2].
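The two aggregation schemes differ only in the outer reduction over the units of x_2; a toy sketch with a stand-in `align_fn` (hypothetical scoring function standing in for the learned alignment model):

```python
def aggregate(align_fn, x1_chunks, x2_units, use_min=False):
    """Aggregate chunk-level alignment scores for long inputs.
    For each unit j of x2, take max_i align_fn(x1_i, x2_j) over the
    chunks of x1; then combine over j with the mean (mean-max, as in
    the paper) or the min (min-max, the reviewer's suggestion)."""
    per_unit = [max(align_fn(c, u) for c in x1_chunks) for u in x2_units]
    return min(per_unit) if use_min else sum(per_unit) / len(per_unit)
```

With one egregiously unsupported unit, min-max is dominated by that single low score, while mean-max averages it against the supported units; this is exactly why min-max is more sensitive to a noisy, underestimating scorer.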
We have therefore experimented with the min-max aggregation method on the factual consistency tasks. The results are shown as follows:
| | Alignment-base | Alignment-base (min-max) | Alignment-large | Alignment-large (min-max) |
|:-|:-:|:-:|:-:|:-:|
|SummaC|85.9|86.1|89.0|89.1|
|TRUE|**85.3**|84.2|**87.2**|86.2|
|Other-Pearson|**45.3**|43.7|**54.4**|52.1|
|Other-Spearman|42.0|**42.3**|**49.7**|49.2|
|Other-Kendall|33.1|33.1|**39.9**|38.8|
Bold numbers indicate the better performance among same-sized alignment models. Our mean-max aggregation outperforms min-max in most cases.
Reference
[1] Laban, Philippe, et al. "SummaC: Re-visiting NLI-based models for inconsistency detection in summarization." Transactions of the Association for Computational Linguistics 10 (2022): 163-177.
[2] Amplayo, Reinald Kim, et al. "SMART: sentences as basic units for text evaluation." arXiv preprint arXiv:2208.01030 (2022).
---
Rebuttal Comment 1.1:
Comment: Thanks for the very thorough response. All of my questions were answered. | Summary: This work proposed a text alignment model for a wide range of tasks that aims to measure the degree of alignment between their information. To be more specific, 5.9M examples from 28 datasets are used to fine-tune RoBERTa model. Experimental results show that the text alignment-enhanced model delivers comparable or superior performance compared to larger LMs, validating the effectiveness of the proposed method.
Strengths: 1. 5.9M examples from 28 datasets are extracted for LMs fine-tuning, and the experimental result on in-domain datasets (Table 1) and zero-shot setup (Table 2) demonstrate the effectiveness of Alignment-RoBERTa.
2. Experimental code has been submitted, benefitting future pre-fine-tuning research.
Weaknesses: 1. The previous pre-fine-tuning work proposed combining multiple losses with different weights [1], which diminishes the technical contribution of this work in terms of pre-fine-tuning.
2. Additional synthetic data is needed, increasing the complexity of the proposed pre-fine-tuning method. Also, the ablation study and corresponding analysis are insufficient in terms of how much the synthetic data contributes to the overall performance.
3. Experimental setup is not convincing because of the contamination of the training set (seen task) and test set (unseen task). Task clustering is needed to remove the concern.
[1] https://aclanthology.org/2021.emnlp-main.468.pdf
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: As for Flan models, there is a potential concern regarding contamination of the training set and test set, especially when it comes to the in-domain test, as shown in Table 1 and Figure 2. I understand you have seen and unseen datasets. However, the training and test sets may not have the exact same data points, but they share similar patterns or characteristics. In this case, things become very trivial: the improvement is just because your 5.9M contains similar data instances as in the test set. Given this context, I have two questions:
1. Did you experiment with an out-of-domain test setting for the alignment-based RoBERTa model? I'd like to know if you cluster the tasks to determine the task boundary, as many tasks have a high overlap.
2. Line 634 in the appendix: you mentioned for each dataset, we only use the first 500k samples. How did you determine the number "500k," and how will performance change if you vary this number?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors state the limitation regarding biases in the constructed dataset, and it is an inevitable issue in the training data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Previous Pre-finetuning Works**
We’d like to clarify that our text alignment method is **not** pre-fine-tuning. Instead, we develop a unified alignment model that is directly applied to a diverse set of tasks **without any additional finetuning**. As we pointed out in the related works section, our work differs from MUPPET (Aghajanyan et al.) in that we 1) use a unified definition of alignment instead of multitask learning, 2) share all model components across tasks and do not use dataset-specific heads, and 3) our model can be directly applied to different tasks without additional finetuning.
**Contribution of synthetic data**
The method we use to create synthetic data is easy and commonly used by many pretrained models (Kryściński et al., 2019; Deng et al., 2021). We also note that some popular datasets we use already contain synthetic samples (e.g., PAWS, DocNLI).
We acknowledge the need for further ablation study on the synthetic data and we thank the reviewer for the suggestion. The experiment results for removing the synthetic data are shown below.
Table 1: Performance comparison on NLU tasks
| Training \ Evaluation Tasks | NLI | Fact Verification | STS | Paraphrase | QA | Coreference | Average |
| --------------------------- | -------- | ----------------- | -------- | ---------- | -------- | ----------- | -------- |
| All tasks | **70.9** | **83.3** | 89.9 | 91.4 | 78.2 | 81.4 | 82.5 |
| -Synthetic | 70.4 | 83.1 | **90.1** | **92.0** | **78.6** | **83.1** | **82.9** |
The result shows that the synthetic data is not critical, as it does not make a big difference to the model performance. The synthetic samples are included to provide a more comprehensive representation of the data the alignment model may encounter. Therefore, the synthetic datasets are not a fundamental part of our alignment framework and do not significantly increase its complexity (the synthetic samples make up only a minor portion of the whole training set), which in turn shows the efficiency and simplicity of our proposed method.
**Test set contamination**
Even for tasks of the same type (e.g., QA), our evaluation datasets can have very different patterns from the training datasets. For instance, the DREAM question answering dataset we use for zero-shot evaluation includes dialogues as the QA context, while the training QA datasets contain general QA samples (e.g., SQuAD v2). In our factual consistency evaluation use case, the evaluation benchmarks also contain task types not present in the training data. For example, TRUE contains dialogue datasets that have very different characteristics compared to the NLI, QA, etc. data used for training. In both cases, our experiment results indicate that our model generalizes better than the baselines.
**First 500k samples**
We limit the number of samples per training dataset to avoid extreme data imbalance between datasets, as some of them are extremely large (please refer to Table 8 in the appendix for statistics). Our practice follows previous work (e.g., FLAN-T5 (Chung et al., 2022)) that also uses similar methods to mitigate the dominance of large datasets. Additionally, this limit helps us save computational resources. It's certainly possible that including more training data could lead to better performance, and we are happy to explore more in the future.
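The per-dataset cap described above amounts to truncating each dataset before mixing; a hypothetical helper sketching the practice (names are illustrative, not from the paper's code):

```python
def cap_datasets(datasets, cap=500_000):
    """Keep only the first `cap` samples of each training dataset so
    that very large datasets do not dominate the training mixture."""
    return {name: samples[:cap] for name, samples in datasets.items()}
```

Smaller datasets pass through unchanged, so only the largest sources are truncated.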
Reference
[1] Kryściński, Wojciech, et al. "Evaluating the factual consistency of abstractive text summarization." arXiv preprint arXiv:1910.12840 (2019).
[2] Deng, Mingkai, et al. "Compression, transduction, and creation: A unified framework for evaluating natural language generation." arXiv preprint arXiv:2109.06379 (2021).
[3] Chung, Hyung Won, et al. "Scaling instruction-finetuned language models." arXiv preprint arXiv:2210.11416 (2022). https://arxiv.org/pdf/2210.11416.pdf, page 46
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. The term "Pre-fine-tuning" is used in the title of [1]. It herein refers to using 5.9M data instances (in your case) to further tune the model and test in the downstream tasks without additional task-specific fine-tuning. So I think I get this right.
Your answer partially addresses my concerns. However, you have not addressed the concern over test set contamination. In the previous FLAN paper [2], section 2 stated that "we group datasets into clusters by task type and hold out each task cluster for evaluation while instruction tuning on all remaining clusters." The reason for doing this is straightforward: training on one QA dataset, for example, might help the model do better on another QA dataset. Therefore, they group all datasets into clusters by type of task and hold out not just the single QA dataset but the entire QA task cluster to which the dataset belongs.
In contrast, in your case, the QA datasets appear in both your training data (Table 1) and test data (Table 2). That's why I have this concern regarding training/test set contamination. The model may learn very simple superficial cues/patterns in the training set without literally understanding them. In short, seen/unseen datasets SHOULD be determined by task cluster instead of a single task.
[1] https://aclanthology.org/2021.emnlp-main.468.pdf
[2] https://arxiv.org/pdf/2109.01652.pdf
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback.
**Pre-fine-tuning**
We’d like to clarify that the term “pre-fine-tuning” as used in [1] does not refer to what you described. In particular, **“pre-fine-tuning” DOES require additional task-specific fine-tuning** before applying the “pre-fine-tuned” model to a downstream task. In contrast, as you pointed out, **our method** applies to downstream tasks **WITHOUT** additional task-specific fine-tuning. Our work is thus fundamentally different from “pre-fine-tuning” methods as in [1].
More specifically, the work [1] explicitly uses task-specific finetuning after “pre-fine-tuning”:
1) The MUPPET authors clearly state in the abstract they *“propose pre-finetuning, an additional large scale learning stage **between** language model **pre-training** and **fine-tuning**”*. Furthermore, in the experimental setup, they mention that *“We first show that pre-finetuning improves the representations of pre-training models. To do so, we **fine-tune** our pre-finetuned models on a large set of tasks”* and *“**Finetuning** Outside of Pre-Finetuning Domain”*. These statements highlight the need for finetuning in the pre-finetuning method, which our alignment model does not require.
2) In 3.5 Experimental Setup of the MUPPET paper, the authors mention that *“Every Sentence Prediction dataset gets a **separate** classification head, for Commonsense and MRC we utilize a separate unified head for each task.”* However, our alignment model does not necessitate assigning a specific head for each downstream task.
**Test Set Contamination**
**There could be different ways of “grouping” and distinguishing between seen and unseen tasks/datasets.** For example, the FLAN paper groups tasks/datasets based on the *task type* as you mentioned. Under this perspective, the task of *factual consistency evaluation* (i.e., predicting a factual consistency score, Section 4.2) is of a different type than any training tasks (e.g., QA), and our model shows substantial improvement over diverse baselines on as many as 23 datasets. On the other hand, as we mentioned in the initial response, one could also define unseen or out-of-domain tasks/datasets as those of different textual patterns/characteristics than training tasks/datasets. For example (as mentioned in the initial response), the dialogue datasets in *factual consistency evaluation* (Section 4.2) are of very different patterns/characteristics than training data (and they are also of different task types); the DREAM QA dataset in Section 4.1 also involves very different forms of context (i.e., dialogue) compared to the QA tasks in training. **In either way, our rich experimental results have shown that our approach does generalize to unseen tasks and show strong improvements over diverse baselines.** Reviewer Bk1X has commended our rich experiments, while both reviewer ayzf and 6RaC concur on our strong results. We will make this clearer.
**Reference**
[1] Aghajanyan, Armen, et al. "Muppet: Massive Multi-task Representations with Pre-Finetuning." Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021. | Summary: This paper leverages the fact that a lot of popular comparison-based NLP tasks like entailment, paraphrase detection, semantic similarity judgement, multiple-choice passage-based QA etc. amount to learning a specific similarity function between two sets of sequences that is a proxy for "information alignment" between the two sets. Hence, this paper gathers together public datasets for many such tasks and, with some light rule-based data augmentation, obtains 5.9M training examples from 28 datasets. Then a moderately sized RoBERTa model is finetuned on this big comparison-based dataset and compared to task-specific models and larger models like FLAN-T5 on many such datasets. This model has also been used as a metric for measuring the factual consistency of NLG models like summarization models. The authors also use this model to detect questions that are unanswerable from the accompanying context to boost the performance of systems on some QA datasets.
Strengths: – The paper is well-motivated and the large aggregated dataset and the model trained on it will be useful to the community for further study.
– This model outperforms larger general models like FLAN T5 and is competitive with task specific finetuned RoBERTa models on various semantic comparison tasks (some of them unseen during training) which shows the effectiveness of similarity between various such tasks.
– The results on summarization evaluation are promising and the usage of this model for identifying unanswerable questions is interesting.
– The ablation study shows an interesting trend indicating that various comparison-based datasets and tasks are very similar and compatible with each other. This raises interesting questions related to the nature of these tasks and datasets.
Weaknesses: – No comparison is made against specialized task-specific models. While fine-tuning RoBERTa on task-specific datasets is informative, a deeper insight into how the proposed model fares in comparison to more focused task-specific models will strengthen the comparison.
– While this paper focuses on comparison and understanding tasks, it is compared to larger generative models that are specially designed for natural language generation. While these models show impressive performance on these understanding/comparison tasks, a more informative comparison would be against larger encoder-based models that are specifically trained to do well on these datasets and benchmarks like SuperGLUE.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparison with task-specific models**
The goal of our work is to design a model that can perform well on a range of tasks without further task-specific fine-tuning. As a result, we compare with LLM that can be used in a similar way (do not require fine-tuning) and include the task-specific fine-tuned RoBERTa baseline as a sanity check. To the best of our knowledge, the results reported on leaderboards such as SuperGLUE are for models specifically fine-tuned on each of the datasets in the benchmark, making them not suitable for a direct comparison with our method.
**Comparison with large encoder-based models**
There are few large encoder-only language models; we compare with the best-known large encoder-based model, Megatron BERT, using results reported by Shoeybi et al. (2020). Our alignment model has similar performance to Megatron BERT of a similar size, but the alignment model does **not** require further fine-tuning, while Megatron BERT does. We also note that as the parameter count of Megatron BERT increases, its performance improvement is limited. This diminishing return is also one of the reasons we opt for a smaller model when it comes to the generality-vs-efficiency tradeoff.
| | Megatron | Megatron | Megatron | Alignment base | Alignment large |
|---------|----------|----------|----------|----------------|-----------------|
| size | 336M | 1.3B | 3.9B | 125M | 355M |
| MNLI-m | 89.7 | 90.9 | 91.4 | 87.82 | 90.34 |
| MNLI-mm | 90.0 | 91.0 | 91.4 | 87.54 | 90.31 |
| QQP | 92.3 | 92.6 | 92.7 | 90.07 | 91.27 |
---
Rebuttal Comment 1.1:
Title: thanks for the rebuttal
Comment: The author response was informative and I am keeping my current score. | Summary: This paper proposes a way to cast a variety of classification tasks into a single text alignment task. The authors found that using the text alignment task could generate better results on certain downstream tasks, compared to FLAN-T5 and GPT-3.5.
Strengths: The paper presents a novel approach by framing everything as an alignment task. The results are good, and I think the verifier results are interesting.
Weaknesses: I understand that the text-alignment model is not well-suited for generative tasks. Therefore, I believe that the conclusion of the paper is a bit unfair to FLAN-T5 and GPT-3, which can be used for generative tasks. It is well known that specialized small models can sometimes perform better than much larger general models. The proposed method thus sounds like another trade-off. I think the proposed method is reasonable, but I believe that further research into different trade-offs would be beneficial.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Also, more baselines should be compared against. If the model is trained on the same subset of tasks without casting them into the alignment format, will it still perform well?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The proposed model cannot handle generative tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **More baseline - multitask learning**
We add comparison with multitask learning as suggested, where the model just trains with exactly the same set of tasks without casting into alignment format. Please see the General Response for results. The results show our alignment model has better performance than the conventional multi-task learning, along with other advantages.
**Trade-offs: small vs large & general, discriminative vs generative**
Our decision to use a small discriminative model is intentional, and we have explicitly discussed this generality-vs-efficiency tradeoff in the introduction section. One of our contributions is to propose a concrete approach in the spectrum of the generality-vs-efficiency tradeoff. We have shown through experiments that this trade-off is desirable as it achieves better performance on a significant set of tasks, with smaller parameter counts than other more general models, leading to a model suitable for many interesting applications (its usefulness is also acknowledged by reviewers Bk1x and 6Rac). The ability to handle generative tasks (while interesting in its own right) is outside of the scope of our definition of alignment. | Rebuttal 1:
Rebuttal: We thank all reviewers for your thoughtful and positive comments. We're encouraged by your appreciation that our text alignment framework is well-motivated (6RaC), novel (ayzf), and interesting (Bk1X); the resulting model has strong performance (ayzf, 6RaC, zhhb), enables interesting use cases (6RaC), and is useful to the community (6RaC, zhhb, Bk1X); the experiments are extremely thorough (Bk1X), and reveal insights about the nature of comparison-based tasks (6RaC); and lastly, the paper is well-written and easy to follow (Bk1X).
**Here we address the concern of comparison with multitask learning**:
### Comparison with multitask learning
We add a comparison with multitask learning and show the advantage of our unified alignment framework (we thank the reviewers for the suggestion). Specifically, to obtain the multitask-learning model, we follow the popular multitask-learning work Muppet (Aghajanyan et al., 2021) and train the same base model with the same set of tasks/datasets as our alignment model. Notably, unlike our alignment model, which uses a unified interface to accommodate all diverse tasks, the conventional multitask-learning model learns separate prediction heads for different tasks.
Results show our alignment model outperforms the multitask learning model on zero-shot datasets, while having similar performance on in-domain datasets. This suggests that our definition of alignment indeed helps the model better generalize to out-of-domain problems.
**Experiment results**
**(1)** The table below shows the **results for in-domain datasets**. The alignment model and the multitask learning model have roughly the same performance.
| Dataset | Multitask base | Alignment base |
|---|---:|---:|
| MNLI-mm | 87.61 | 87.54 |
| MNLI-m | 87.68 | 87.82 |
| ANLI-1 | 65.10 | 65.30 |
| ANLI-2 | 48.20 | 48.70 |
| ANLI-3 | 46.17 | 45.50 |
| SNLI | 91.33 | 90.78 |
| NLI-FEVER | 76.50 | 76.78 |
| VitaminC | 89.82 | 89.79 |
| SICK | 89.01 | 90.71 |
| STSB | 87.86 | 89.03 |
| PAWS | 93.94 | 92.33 |
| PAWS-QQP | 92.32 | 91.88 |
| QQP | 90.66 | 90.07 |
| Multi-RC | 83.83 | 82.20 |
| BoolQ | 81.32 | 81.07 |
| SciQ | 92.10 | 92.40 |
| GAP | 81.65 | 81.35 |
| **Average** | **81.48** | **81.37** |
**(2)** The table below shows the **results in the zero-shot setting**. For the multitask-learning model and each zero-shot evaluation task, we use a set of “reasonable” heads obtained during training for prediction, and we report the best, worst, and average performance among these heads. For NLI, QA, and paraphrase evaluation datasets, the “reasonable” heads are all heads trained with NLI, QA, and paraphrase detection datasets, respectively.
As shown in the table below, the alignment model outperforms the average performance of the “reasonable” multitask learning prediction heads, and the multitask learning model only slightly outperforms the alignment model if we **cherry-pick** the best performing head for each zero-shot dataset. In a strict zero-shot setting, as we can’t use evaluation data for head selection, and the best performing head varies across evaluation datasets, the reported “best head” performance is an unrealistic upper bound. Thus the results indicate the alignment model has an advantage over multitask learning when applied to new datasets.
| Dataset | Multitask base (avg. of heads) | Multitask base (best head) | Multitask base (worst head) | Alignment base |
|---|:---:|:---:|:---:|:---:|
| AXB | 74.94 | 76.18 | 72.37 | 75.09 |
| AXG | 63.58 | 65.73 | 60.39 | 59.83 |
| CB | 80.00 | 83.93 | 76.79 | 76.79 |
| RTE | 81.41 | 81.95 | 81.23 | 83.39 |
| WNLI | 56.34 | 60.56 | 47.89 | 52.11 |
| SE14T1 | 57.86 | 58.45 | 57.17 | 90.72 |
| MRPC | 69.60 | 71.19 | 68.23 | 65.97 |
| DREAM | 67.04 | 73.98 | 57.03 | 71.34 |
| Quartz | 59.09 | 63.52 | 54.46 | 59.69 |
| **Average** | **67.76** | **70.61** | **63.95** | **70.55** |
**In addition** to the above stronger performance in the zero-shot setting, there are two notable advantages of our proposed method over multitask learning:
1. Our approach casts all the diverse tasks into a unified alignment format, which allows us to leverage these datasets naturally. In comparison, it’s not straightforward for a discriminative multitask-learning model to train on some of the datasets in our alignment data. For example, a multitask-learning model cannot easily train on open-ended QA datasets where the ground truth answers do not exist in the context. For fair comparison, we reuse our converted alignment version of those datasets in multitask learning.
2. As the multitask-learning model uses task-specific heads, it cannot be straightforwardly applied to new out-of-domain datasets. In the out-of-domain (zero-shot) setting, for each new dataset we have to find the most “similar” training task and use the associated head for inference, which can be difficult when data is limited. In comparison, our alignment model has unified prediction heads across tasks and datasets, making it easy to use the model for new datasets.
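The interface difference described in the two points above can be illustrated with a minimal, hypothetical sketch (the toy encoder, task names, and head weights are all illustrative stand-ins, not the paper's actual implementation): a per-task-head multitask model fails on a task it has no head for, while a unified alignment head scores any input pair.

```python
import numpy as np

DIM = 4  # illustrative encoder width

def encode(text_a, text_b):
    """Stand-in for a shared text encoder: a deterministic toy embedding."""
    raw = (text_a + " " + text_b).encode()
    return np.array([sum(raw[i::DIM]) % 97 for i in range(DIM)], dtype=float)

class MultitaskModel:
    """Conventional multitask learning: one prediction head per training task."""
    def __init__(self, tasks):
        # toy fixed head weights, one vector per task
        self.heads = {t: np.full(DIM, 0.01 * (i + 1)) for i, t in enumerate(tasks)}
    def predict(self, task, text_a, text_b):
        if task not in self.heads:
            raise KeyError(f"no head was trained for unseen task {task!r}")
        return float(self.heads[task] @ encode(text_a, text_b))

class AlignmentModel:
    """Unified alignment format: a single head scores every task the same way."""
    def __init__(self):
        self.head = np.full(DIM, 0.01)
    def predict(self, task, text_a, text_b):
        # task identity is irrelevant at inference time
        return float(self.head @ encode(text_a, text_b))

mt = MultitaskModel(["nli", "paraphrase"])
al = AlignmentModel()
try:
    mt.predict("qa", "a cat sat", "is a cat sitting?")
    unseen_ok = True
except KeyError:
    unseen_ok = False
score = al.predict("qa", "a cat sat", "is a cat sitting?")
print(unseen_ok, isinstance(score, float))  # False True
```

This mirrors the zero-shot discussion above: the multitask model needs a head-selection heuristic for a new dataset, whereas the alignment model applies unchanged.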
We’ll include the above results and articulate all experimental details in the revised version of the paper. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Asymptotics of Bayesian Uncertainty Estimation in Random Features Regression | Accept (poster) | Summary: The authors study random feature (RF) regression in the high-dimensional setting.
They focus on comparing the variance of the Bayesian posterior distribution (a measure of uncertainty in the Bayesian setting) to the variance of the maximum-a-posteriori estimator (a measure of uncertainty in the frequentist sense, and the generalisation error, already studied in previous work), motivated by previous work showing a discrepancy between the two quantities in the high-dimensional setting. They ask whether the two quantities have the same high-dimensional asymptotics.
The authors compute the high-dimensional asymptotics of the variance of the Bayesian posterior distribution.
They find that it equals the variance of the maximum-a-posteriori estimator only in the region where the optimal L2 regularisation is not vanishing in the strongly overparametrised case, and that they always match in the large sample complexity regime, close to the classical low-dimensional limit.
The authors study numerically the fluctuations of the two observables studied, and conjecture that they are Gaussian distributed and that these distributions have non-overlapping supports around the interpolation threshold.
Comment:
The main proof seems a minor variation on reference "Mei and Montanari, 2022".
A reader familiar with that work (which I am not) should be able to easily tell whether the proof of the results of the authors is correct.
Strengths: - The authors compare Bayesian and frequentist notions of variance-like observables, computing a novel characterisation for one such variable. They also provide interesting numerical observations of the fluctuations of such variables, proposing a number of new conjectures.
Weaknesses: - The paper is not badly written overall, but many details (see list below) could be improved.
- The authors provide no code to reproduce their figures.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: - The authors observe empirically and conjecture near-orthogonality of $R$ and $S^2$ at the interpolation threshold $N=n$. Is there any intuition why this should be the case? And what are the possible consequences of this fact?
## Suggestions for manuscript improvement
- this is more of a personal preference, but I like to number all equations in my manuscripts. This makes it easier for readers (and reviewers) to refer to a given equation when providing feedback.
- Eq between line 103 and 104: $p(y | x, \Theta, a)$ is not clearly defined. Do the authors mean that this is the same distribution as the second line of the equation between line 95-96? This should be specified.
- Eq between line 105 and 106: missing subscript $i$ for the datapoint $x$.
- line 106: possibly wrong reference to Eq 5. Maybe the authors are referring to Eq 4, or want to say that the Gaussian measure is centered around $\hat{a}$.
- equation between line 113 and 114: the authors could derive this equation (even if trivial) either in the main text or in the appendix. For example, I do not see at first sight how to derive it by averaging over $x$ in Eq.8, which by Eq. 9 is the definition of $S_{RF}^2$. Line 115-116 seems to imply that indeed this equation is not derived from Eq. 8. Where does it come from? Over which Gaussian process is the average performed?
- line 123: where is the marginal likelihood defined? Where does Eq 10 come from?
- line 183-190: this paragraph is not clear to me. Is "The second point" the full Proposition 2, or just point 2 of Proposition 2? Why does this suggest that there is not double descent? I only see monotonicity in the L2 regularisation parameter and boundedness.
- line 209-211: it is not apparent at first sight that in the second regime equation between line 207-208 and the one between line 176-177 coincide. Is there an easy way to see it? If not, this should be done explicitly, or better justified, somewhere.
- Figure 1: the caption is incomplete (and a legend is missing). One can guess what the circles and the black curve are, but it would be clearer if the authors wrote it explicitly. Also, the authors should specify how the error bars are computed.
- Figure 2: apart from the dashed line at 1, I see two curves here, orange and blue. It seems that each curve is for a different value of $\rho$, but it is not clear which one is which. I guess from lines 237-240 orange is smaller rho, but this should be clearly specified in a legend.
- line 241: Figure 1 shows concentration only for $S^2$, and not for $R$ for which simulations are not provided, contrary to what this phrase states.
- Figure 3: it would be clearer if the authors listed here all the sources of disorder causing the fluctuations of the quantities plotted
- line 285-287: doesn't the author find in Proposition 4, point 1, that there exists an overparametrised regime where $S^2 > R$, i.e. the posterior is unnecessarily wide? Maybe I am misunderstanding this phrase.
- line 216: possible typo $+\alpha$
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: The authors discuss nicely limitations/shortcomings of their choice of observables to compute in Section 4.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
* We thank the reviewer's suggestions for manuscript improvements. We respond to specific items below.
* We have provided access to an anonymized code used to produce the Figures to the AC.
## Questions
* At the "interpolation threshold" of $N=n$, two main features are notable.
First, there exists a numerical instability for both quantities plotted that we believe (but do not exactly prove) is mainly driven by the near-non-invertibility when $N=n$. The justification is the literature on the minimum singular value of square random matrices (e.g., see Tao and Vu [2009]).
Second, the two risks appear "nearly orthogonal" in law, which was a crucial finding of Freedman in the Gaussian sequence estimation problem. We do not have a theoretical explanation as to why this regime is important in observing this phenomenon. As remarked in the manuscript, second-order asymptotics of the two risks seem key to understanding it.
## Suggestions for manuscript improvement
1. We agree with the reviewer's point about numbering. All equations will be later renumbered.
2. The distributions specified in lines 95-96 should also be denoted as $p(\boldsymbol{a})$ and $p(\boldsymbol{y}|\boldsymbol{X,\Theta,a})$, as suggested. These will be made more explicit.
3. Line 106 is meant to read "centered around $\widehat{\boldsymbol{a}}$" and *not* $\widehat{\boldsymbol{f}}$. This will be corrected.
4. We will incorporate the suggested change.
5. The decomposition of (9) follows from (8), distributing $\phi^{-1}$ inside the parentheses, along with the posterior on the weights given above line 106. The weight posterior induces a posterior distribution on the Gaussian process $\{f(\boldsymbol{x})\}$ indexed by $\boldsymbol{x}$ uniformly drawn from the sphere, centered around $\widehat{f}$. The expectation in (9) is the expectation over this posterior Gaussian process, as stated in lines 110-111. The $L^2$ notation is an abbreviation of averaging over $\boldsymbol{x}$. We will change it into $L^2(\mathbb{S}^{d-1}(\sqrt{d}))$, so that it is more strongly suggestive.
6. The marginal likelihood is not explicitly stated, but derivable as that of a Gaussian distribution induced by the prior on weights $\boldsymbol{a}$ and the Gaussian likelihood model (referred to in item 2 above).
7. "Second point" indicates **bullet 2** of Proposition 2. By "no double descent," we indicate that there is "no blow-up of risk that separates the underparameterized and overparameterized regime." In this sense, it cannot happen because of the boundedness.
8. Lines 209-211 summarize in words the mathematical statement that follows (Proposition 4). It is not meant to be immediate at first sight and is proven in the supplement. We will add a sentence pointing the reader to the proof at the camera-ready stage.
9. Figures with appropriate legends included are now in the PDF. The error bars plot 1 standard error interval around the average of numerical simulations at each point of $\psi_1$. We will incorporate the Figures and changes in the caption into the manuscript at the camera-ready stage.
10. The point is addressed by adding legends in Figure 2.
11. As pointed out, numerical simulations are not explicitly shown for $R_{RF}$. Concentration was already demonstrated in Figure 1, Mei and Montanari (2022).
12. We describe the sources of randomness contributing to the histograms in Figure 3. The randomness is due to a specific instantiation of the training data $(\boldsymbol{y,X,\Theta})$. The true data-generating mechanism, represented by function $f_d$, is fixed. The two risks for a given instantiation of the training data are averages over $\boldsymbol{x}$ uniformly drawn from the sphere, which in numerical simulations is approximated by Monte Carlo averaging. We will elaborate on the sources of randomness for interpreting plots at the camera-ready stage.
13. We are here addressing a general situation with a not-necessarily-Gaussian prior on the weights and a possibly different likelihood. In this scenario, we expect qualitatively similar findings about the posterior distribution, but this is not exactly proven in the manuscript.
14. "$+\alpha$" is *not* a typo, but rather a notation to indicate that we are re-stating a theorem from the previous paper, along with our additional result. Because our point is to make a comparison between the asymptotics of two different quantities, it seems reasonable to not precisely separate the two statements.
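As a concrete companion to item 5, the following is a minimal numerical sketch of the expected posterior predictive variance under an assumed Gaussian-Gaussian conjugate specification with ReLU random features; the dimensions `d, N, n`, prior variance `gamma2`, and noise variance `phi` are illustrative choices, not the paper's settings or its asymptotic regime.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, n = 20, 30, 40        # illustrative (non-asymptotic) dimensions
gamma2, phi = 1.0, 0.5      # assumed prior variance and likelihood variance

def sphere(m):
    """m points uniform on the sphere of radius sqrt(d)."""
    u = rng.standard_normal((m, d))
    return u * (np.sqrt(d) / np.linalg.norm(u, axis=1, keepdims=True))

Theta = rng.standard_normal((N, d))                   # random feature directions
relu = lambda t: np.maximum(t, 0.0)
feats = lambda xs: relu(xs @ Theta.T / np.sqrt(d))    # rows are z(x) in R^N

X = sphere(n)                                         # training inputs
Z = feats(X)

# Gaussian prior a ~ N(0, gamma2 I) and Gaussian likelihood give a closed-form
# posterior covariance for the weights; note it does not involve y at all.
Sigma_post = np.linalg.inv(Z.T @ Z / phi + np.eye(N) / gamma2)

# Expected PPV: Monte Carlo average of z(x)' Sigma_post z(x) + phi over the sphere
Znew = feats(sphere(2000))
expected_ppv = float(np.mean(np.einsum("ij,jk,ik->i", Znew, Sigma_post, Znew)) + phi)
print(expected_ppv > phi)  # True: predictive variance exceeds the noise floor
```

Because `Sigma_post` (and hence the expected PPV) involves neither the labels nor the unknown truth $f_d$, this toy computation also illustrates why the PPV is a statistic the researcher can actually compute, unlike the generalization risk.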
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my points, mainly concerning unclarity of the presentation and the lack of code for reproduction of numerical experiments and figures.
The majority of my points were satisfyingly addressed, and for this reason I am willing to update my initial grading from 4 to 5. | Summary: The authors compare in this paper the asymptotic posterior variance of the predictive risk in the random feature model associated with a Gaussian prior on the weights with the asymptotic \textit{frequentist risk} which had been derived by Mei and Montanari. The asymptotic here is in terms of the dimension d of the covariates, in a context where the number of features $N$ and the number of observations $n$ are comparable with $d$. This regime is often studied to understand the phenomenon of double descent.
They derive, under the same assumptions as in Mei and Montanari, the asymptotic behaviour of the posterior predictive risk. They show that in the "wide" regime, i.e. when $N/d$ converges to infinity with low signal-to-noise ratio, or when $n/d$ goes to infinity, both risks (under the optimal choice of the hyperparameter $\lambda$ for the frequentist risk) have the same limit, while when $N/d$ goes to infinity and the "SNR" $\rho$ is larger than some threshold, the posterior variance is asymptotically larger than the frequentist risk, which is 0. The authors then perform a simulation study to understand more precisely the role of $\lambda$ and the different asymptotic regimes.
Strengths: The comparison of the frequentist and the Bayesian approaches in the double descent types of regimes is a very natural question and treating the random feature model seems like the obvious first choice to attack the problem. The results derived by the authors seem correct and the simulations shed light on the results. The proofs are a consequence of the proofs of Mei and Montanari, but remain technical (at least the proof of Proposition 1).
The posterior variance, as studied by the authors, is strongly related to the radius of credible regions; therefore understanding whether the posterior variance is similar to the frequentist risk is a first step towards understanding whether credible balls are confidence balls, as hinted by the authors. As noted by the authors, this is not a full statement on coverage of credible balls, and higher-order asymptotics are required to understand the frequentist coverage of credible balls.
Weaknesses: What I am not so clear about is what the conclusions/consequences of the results obtained in this paper are. In particular, the \textit{frequentist} risk of Montanari is not really frequentist and, in my opinion, already has a Bayesian flavour. The risk $\|f_d - \hat f\|_2^2$ (which is in fact a loss function) is viewed in Montanari as stochastic both in terms of the data $X,Y$ appearing in $\hat f$ and in terms of $f_d$ (not to speak of the randomness of $\Theta$), and its asymptotic behaviour is derived under this regime. Hence, it would be interesting to have a proper discussion of the different notions of risk considered in the paper. I also think that the authors do not comment enough on why the two risks are equivalent in some regimes and not in others. Some description in terms of $\tau$ (noise variance) being large or small in the wide regime is provided, but no real interpretation.
That being said, I still find the results interesting, but after reading I am uncertain what to make of them.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper has a number of typos and inconsistencies which make it a bit cryptic, and which I will list below. In particular, the probability measure under which expectations are calculated is not always clearly stated. In the context of this paper it is particularly important to be precise about this, because these expectations are somewhat non-standard.
1. p4 line 110: The comment on the credible radius is cryptic or incorrect. By definition a credible radius depends on the coverage (Bayesian) of the credible ball. So it cannot be equal to the posterior variance for all coverage. This sentence makes no sense to me.
2. In the definition of the Bayesian model in line 95, the variance of the noise is denoted $\phi$ while in the previous section it was $\tau$. Is this a typo, is there some confusion, or is it that the model may also be misspecified in the variance component? If it is purely a notational inconsistency and $\phi$ also denotes the variance of the noise, then the paragraph in lines 118-123 is puzzling. It reads as if $\phi$ were a quantity chosen by the analyst, while it is driven by the data. It is not completely clear to me why $\phi$ should go to 0 while in the subsequent computations $\tau$ is held fixed.
3. Proposition 1: What is $\mathbb C_+$? Also, what does the sentence "and for expectation over the randomness of $(y, X, \Theta)$" mean? There is no expectation in the statement below.
4. Proposition 3: Recall what the expectation is taken with respect to (i.e., which variables).
5. Do the activation functions considered in the simulated examples satisfy the assumptions? Provide some examples which satisfy the assumptions.
6. What are the different curves in figure 2?
7. line 233: What do you mean by "conservativeness of the credible ball"?
8. What do you mean by "the posterior variance fluctuates like a Gaussian random variable with a variance strictly smaller than that of the frequentist risk"? (l 250) The posterior variance is deterministic in a Bayesian Gaussian-Gaussian model (as in Freedman).
9. What does the sentence on line 286 mean: "the posterior predictive mean is left unchanged from that of the posterior"? Similarly, I don't understand the last sentence of that paragraph.
10. l310: What do you mean by "interval of the function space posterior, $\hat f$"? And by the variance of the posterior of $\hat f$ a few lines below?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have interesting remarks on the open questions raised by the paper and what are the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
- As pointed out, the risk $\|f_d - \widehat{f}\|^2$ appearing in Mei and Montanari's work is viewed as stochastic in the input. Even for a specific instantiation of the training data $(\boldsymbol{y,X,\Theta})$, however, the quantity depends on the "truth" $f_d$ and is unknown to the researcher.
Mei and Montanari's theory provides an explanation of the average (and high probability) behavior of the risk of the learning procedure under certain assumptions on $f_d$.
The expected PPV investigated in the manuscript is computable for a given instantiation of the training data and provided as a statistic for UQ. Its asymptotic average behavior *does* depend on the assumptions we make on $f_d$.
Therefore, our analysis should be read as highlighting whether the "unknowable risk" and the "Bayesian's risk for UQ" behave similarly under the same technical assumptions on the data-generating mechanism. This was exactly done in the problem of infinite-length Gaussian denoising by Freedman, hence our repeated reference to his work.
- The interpretation of the role of $\tau$ is subtle. We elaborate a little on the left panel of Figure 2 for one such interpretation. The blue curve shows that for "near-noiseless'' problems, where there is little noise and the error primarily comes from approximation error with respect to $f_d$, the actual generalization risk decreases faster than the posterior predictive variance as the model becomes more overparameterized.
Thus, overparameterization is primarily beneficial for "noiseless" learning problems where the posterior mean provides a near-perfect prediction. This regime can be precisely pinned down by the phase transition of Mei and Montanari (Proposition 4). The Bayesian's posterior predictive distribution centered around the mean, roughly speaking, does not contract as it should with increasing dimensions, so an uncertainty estimate based on its variance will be more "conservative" than it should be for future predictions. The formulae suggest the possible existence of "sweet spots" of noise level, scaling with dimensions, where the ratio is nearly 1.
## Questions
1. We note that we did not provide a precise definition of the "credible ball,'' which led to the confusion. In our manuscript, a *$(1-\alpha)\%$ credible ball* is simply synonymous with the $L^2$-ball centered around the posterior (predictive) mean containing $(1-\alpha)\%$ of the posterior (predictive) mass, so it need not have the nominal coverage. We will explicitly define the term at the camera-ready stage to eliminate this confusion.
2. $\tau^2$ is the variance of the noise corrupting the observations. $\phi$ can be interpreted as essentially estimating $\tau^2$. Hence, the specific choice of (10) is well-justified. We used the more generic notation $\phi$ to emphasize that (10) is not the only choice, and that it must scale appropriately with the relevant dimensions $d,n,N$ for our asymptotics to be meaningful.
3. $\mathbb{C}_+$ is the upper half complex plane containing all complex numbers with positive imaginary part. The phrase "expectation over'' should be omitted, which we will do at the camera-ready stage.
4. Expectation is with respect to all random quantities needed to define $R_{RF}$; hence, it is an integral over the joint measure of $\boldsymbol{y,X,\Theta}$.
5. The main example, ReLU, satisfies the assumptions. Other examples include mixed-order polynomial functions.
6. The appropriate legends were omitted from Figures 1 and 2. In Figure 2, the two curves correspond respectively to the ratio described in the caption for high $\rho$ (blue) versus low $\rho$ (orange). New figures with legends have been included in the PDF attached to the general rebuttal.
7. This confusion, we believe, is also caused by our non-definition of the term "credible ball." Since the $L^2$-ball around the posterior mean need not have the nominal coverage, its "conservativeness" implies that its actual coverage is strictly larger than the posterior mass it contains.
8. As pointed out, the phrase "posterior variance'' is incorrect and should be revised. We would like to elaborate on what is meant by this phrase and why we make the comparison to Freedman's result.
In Freedman (1999), a simpler problem is studied, where an infinite-length sequence $\beta\in\mathbb{R}^\infty$ is observed with additive Gaussian noise (with known variance). The phrase "posterior variance'' must be changed into the "Bayesian pivot'' in equation (4) of Freedman (1999): $\|\beta-\widehat\beta\|^2$, where $\widehat{\beta}$ is the posterior mean of $\beta$ under a Gaussian prior. Unlike in our setting, Freedman's has essentially only two sources of randomness, contributed by a prior distribution and observational noise. Hence, the expectation of the "pivot'' with respect to the posterior of $\beta$ is deterministic. On the other hand, the expected PPV in our study is random, as it depends on a given instantiation of the training data $(\boldsymbol{y,X,\Theta})$.
The key feature we wanted to emphasize was the similarity between Figure 3 and the near-orthogonality of the laws of "frequentist'' and "Bayesian'' expectations of the pivot found by Freedman. While it would have been best to formally expand on the notations used by Freedman in his study, we could not due to lack of space and organization.
9. Line 286 means that the posterior predictive distribution, $p(y|\boldsymbol{x,y,X,\Theta})$, is centered around the same mean as the posterior distribution, $p(f(\boldsymbol{x})|\boldsymbol{x,y,X,\Theta})$. This need not be true in the case when there is no Gaussian-Gaussian conjugacy structure as in our manuscript.
10. Line 310 means that one may compare not the posterior predictive distribution of $f$, but its posterior, as described above.
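A toy finite-dimensional illustration of the contrast drawn in item 8, with assumed variances `tau2` and `sig2`: in a Gaussian-Gaussian sequence model, the posterior expectation of the pivot $\|\beta-\widehat\beta\|^2$ equals the trace of the posterior covariance, $p \cdot v$, and therefore does not depend on the observed data.

```python
import numpy as np

p, tau2, sig2 = 50, 2.0, 1.0           # assumed sequence length and variances
shrink = tau2 / (tau2 + sig2)          # posterior mean multiplier
v = tau2 * sig2 / (tau2 + sig2)        # per-coordinate posterior variance

rng = np.random.default_rng(1)

def mc_pivot(y, m=200_000):
    """Monte Carlo posterior expectation of ||beta - beta_hat||^2 given data y."""
    beta_hat = shrink * y
    draws = beta_hat + np.sqrt(v) * rng.standard_normal((m, p))
    return float(np.mean(np.sum((draws - beta_hat) ** 2, axis=1)))

# Two unrelated data sets give (up to Monte Carlo error) the same value p*v:
y1, y2 = rng.standard_normal(p), 10.0 + rng.standard_normal(p)
print(abs(mc_pivot(y1) - p * v) < 0.5, abs(mc_pivot(y2) - p * v) < 0.5)  # True True
```

In the manuscript's setting, by contrast, the expected PPV is random because it depends on the instantiation of $(\boldsymbol{y,X,\Theta})$, which is exactly the difference from Freedman's problem noted above.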
---
Rebuttal Comment 1.1:
Title: with respect to true data generating process
Comment: I am still unclear about what $f_d$ is. In your answer you say that the risk is frequentist and depends on $f_d$, or rather on assumptions on $f_d$, but your data-generating model has a deterministic linear term and a stochastic non-linear term $f_{NL,d}$. So what do the authors mean by $f_d$? The stochastic $f_{NL,d}$ makes the risk somewhat Bayesian, as an average over possible $f_{NL,d}$ functions.
---
Reply to Comment 1.1.1:
Title: Response to dJw1
Comment: Taken from Equation 1: the data-generating function is $f_d(x) = \beta_{d,0} + \langle x,\beta_d\rangle + f_d^{NL}(x)$. In the asymptotic analysis, the stochasticity vs. non-stochasticity is not as important as the fact that the coefficients are asymptotically "normalized" (Assumption 3). As pointed out, Mei and Montanari's risk analysis is applicable on average, so it has some Bayesian flavor. However, operationally, we do not know either an explicit form of $f_d$, which is the very function we want to learn, or the relevant coefficients $F_1,F_*,$ and $\tau$ appearing in the asymptotic formula. The hope is that the Bayesian PPV (which, as we pointed out, does not involve $f_d$) behaves similarly enough to the risk that asymptotically it provides a good indicator, or estimator, of that risk. What we have tried to highlight is when that expectation can be frustrated and when not.
We hope to emphasize these points more at the camera-ready stage. Thank you for your astute questions and comments. | Summary: This paper considers the random feature model. Two objects are studied: (1) The posterior predictive distribution (in particular, the variance), and (2) The MAP estimator. This paper gives asymptotic formulas for these two quantities under the proportional regime. Comparison between these two quantities are made, in terms of both theory and experiments. Their results suggest that the posterior predictive summaries can be very different from that of the generalization error.
Strengths: This paper is among the few works that address the following question: are Bayesian credible sets also valid confidence sets in the frequentist sense? This is true in the finite-dimensional setting with n \to \infty, but unclear in the high-dimensional setting. They give precise expressions for the two targeted quantities: the generalization error and the posterior predictive variance. They give rigorous results showing that these two quantities agree and disagree in different regimes. The presentation is nice and clear. Extensive simulation is conducted to justify several unproven claims.
Weaknesses: 1. Their setting is a little bit restrictive: Only one specific prior structure is considered. Having said that, I understand that choosing this prior is for getting a precise asymptotic expression for the quantities of interest.
2. Their theoretical results seem like direct consequences of [Mei and Montanari 2022]; perhaps the authors should highlight their technical contributions compared to previous works. This is my major concern, and I will raise my score if this question is well addressed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Line 312: "worse an uncertainty estimate -> worse than uncertainty estimate"
2. It is not clear why they consider averaged posterior predictive variance versus generalization error. The authors did present an explanation in the end. However, this explanation, at least to me, is like simply stating "these two quantities are different and identical in several regimes". I think a more convincing argument should be provided.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: Limitations have been adressed, and societal impact not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
1. We agree that the Gaussian prior structure is restrictive. Incorporating more diverse priors (log-concave, scale mixture of Gaussian, etc.) is an interesting technical challenge. Extension to log-concave priors seems to be the natural immediate next step.
2. We agree that our manuscript relies heavily on the technical tools of Mei and Montanari (2020). Our main goal was not to add to this technical machinery, but to use it to ask a question about the confidence properties of Bayesian procedures that was not addressed by previous authors.
We also identified new technical directions in the manuscript that will be pursued by the authors in future works. Overall, it is of interest that the technical tools of Mei and Montanari (2020) can be used to analyze Bayesian procedures and derive different asymptotic properties.
## Questions
1. Line 312 is *not* a typo. However, we will change the phrase to "an even worse uncertainty estimate," as it reads much more clearly without possible confusion.
2. We believe there are two angles on why expected PPV is a meaningful comparison point.
First, it is a knowable, computable quantity from the researcher's point of view without knowledge of the true data-generating mechanism $f_d$. The expected PPV investigated in the manuscript is computable for a given instantiation of the training data and provided as a statistic for UQ. Its asymptotic average behavior *does* depend on the assumptions we make on $f_d$.
Therefore, our analysis should be read as highlighting whether the two quantities, standing for Mei and Montanari's "unknowable risk" and the "Bayesian's risk for UQ," behave similarly under the same technical assumptions on the data generating mechanism.
Second, as we explain in the Discussion (lines 308-318), it is possible to either consider the posterior of $f$ induced by the weight posterior or the posterior predictive obtained by marginalizing out the weights. Both are Gaussian processes indexed by $\boldsymbol{x}$, which is randomly drawn. Other than the technical point explained in the manuscript, we also believe that prediction for future data is of high interest in most supervised learning tasks, and UQ should also target predictive uncertainty in this case. Hence, we choose the variance of the posterior predictive distribution as a comparison point for the generalization risk.
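To make the "computable from the researcher's point of view" claim concrete, here is a minimal numerical sketch (ours; deliberately simplified relative to Eqs. (6)-(9) in the manuscript, with hypothetical values for the dimensions, the ridge/prior parameter `lam`, and the noise variance `sigma2`): given training data and fixed random ReLU features, the weight-space posterior covariance and hence the averaged posterior predictive variance can be evaluated without any knowledge of $f_d$.

```python
# Hedged sketch: posterior predictive variance for Bayesian ridge regression
# with fixed random ReLU features. All constants below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 200, 20, 50           # samples, input dimension, number of random features
lam, sigma2 = 1.0, 0.25         # ridge/prior parameter and noise variance (assumed)

X = rng.standard_normal((n, d)) / np.sqrt(d)
W = rng.standard_normal((N, d))            # fixed random first-layer weights
Phi = np.maximum(X @ W.T, 0.0)             # ReLU random features, shape (n, N)

# Weight-space posterior covariance: sigma2 * (Phi^T Phi + lam I)^{-1}
Sigma = sigma2 * np.linalg.inv(Phi.T @ Phi + lam * np.eye(N))

# Posterior predictive variance at x is sigma2 + phi(x)^T Sigma phi(x);
# averaging over fresh test inputs mimics the "expected PPV". Note that no
# knowledge of the data-generating function f_d enters this computation.
Xte = rng.standard_normal((1000, d)) / np.sqrt(d)
Phite = np.maximum(Xte @ W.T, 0.0)
ppv = sigma2 + np.einsum('ij,jk,ik->i', Phite, Sigma, Phite)
print(round(float(ppv.mean()), 4))
```

The point of the sketch is only that `ppv` is a statistic of the observed design and the features, while the generalization error additionally depends on the unknown $f_d$.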
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for the response! My previous concern was that the main technical tool largely overlaps with a previous work, while I agree that the perspective might be novel. Hence, I am updating my evaluation from 4 to 5. | Summary: The main focus of the paper is on the comparison between the posterior predictive distribution and the frequentist risk associated with the maximum a posteriori estimator for the random features ridge regression model. The target function is assumed to be a sum of a linear model, a non-linear function given by a Gaussian process with a specific kernel (see assumptions), and an iid noise model with mean zero, constant variance, and bounded fourth moment (see Eq. 1). The technical part starts with a review of the frequentist risk and its empirical estimator, given in Eq. (7). This is followed by a Bayesian model that assumes a normal prior on the weights and a likelihood model that conditions on the random features and the weights of the linear model (see Section 2.2). The posterior distribution is the Gaussian centered at the maximum a posteriori estimator with covariance matrix given in Eq. (6). This is standard for the weight-space derivation of Gaussian processes. The posterior predictive at an instance x is given in Eq. (8) and is just the variance at that point. The expected posterior predictive (EPP) is given in Eq. (9) and is obtained by taking the expectation relative to the instance space. The paper then provides an asymptotic characterization of the EPP relative to prior work by Mei & Montanari (2022) on the frequentist risk. Numerical simulations illustrate the difference between the two concepts and show the lack of double-descent phenomena in the Bayesian setup.
Strengths: It is a nicely presented work with an interesting insight into random features and the differences between Bayesian and frequentist notions of uncertainty. The theoretical result is cumbersome to parse but still provides an interesting illustration and side-by-side comparison with Mei & Montanari (2022). Numerical simulations also support the theoretical conclusions. The discussion is also valuable, pointing at some of the shortcomings and directions for future research.
Weaknesses: The assumptions might constrain the problem setting strongly, but the results still provide an interesting insight into what happens asymptotically.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Are the instances $x_i$ from the interior of the d-dimensional sphere or from its surface ("around" is not precise enough)? If the latter, why is this assumption necessary, and to what extent does it affect the results? Normally, one assumes that instances are contained within a ball of constant radius, and this appears to be a somewhat stronger assumption.
In Eq. (1), I see $f_d^{NL}$ defined but not $f_d$. The latter appears in Eq. (7) and my understanding is that it refers to the function described in Eq. (1). Could you please do a pass over the notation?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
- We agree that the assumptions can be quite restrictive. We address the input distribution below.
Besides the input distribution, we also assume a Gaussian prior structure on the weights and a form of analytic expansion of both the learned function class and the activation function.
Extending the former is an interesting technical challenge. The case of log-concave priors is likely manageable with existing theoretical tools. Similarly, we also believe more technical work can be done to cover different likelihoods, such as that for logistic regression.
The latter assumption, while technical, does cover the ReLU activation, so it is flexible enough to allow non-smooth functions.
## Questions
- Inputs are distributed on the **surface** of the sphere (we will rephrase the sentence at the camera-ready stage). The input distribution can affect the results to some extent if it is not "generic" enough.
For example, when it is Gaussian, one should expect that in high dimensions many results similarly hold due to the concentration of measure (e.g., see Cheng and Singer [2013]).
When there is dependence within the input, or if the input is distributed non-uniformly around the sphere/in the ball, it is harder to predict possible changes in the results.
- $f_d$ is, as pointed out, meant to be a fixed function used for generating data in equation (1). We will clarify this at the camera-ready stage. | Rebuttal 1:
Rebuttal: We thank all reviewers for the careful and constructive feedback. We list below a general re-emphasis of the main point of our work and propose changes at the camera-ready stage that directly address the reviewers' concerns.
## General Rebuttal
- Broadly speaking, this paper examines what is similar and what is different between the MAP setting and marginalizing over the posterior. This question is of fundamental importance in understanding the difference between penalized frequentist procedures and fully Bayesian frameworks. There are two interesting results in this paper:
1. there is a difference between the two inference procedures;
2. the mathematical analysis in the paper by Mei and Montanari (2022) can be adapted to analyze a fully Bayesian procedure.
## Proposed changes at the camera-ready stage
- Mathematical notations and descriptions of the main propositions will be reviewed and revised according to the reviewers' suggestions.
- The appropriate legends have not been included in Figures 1 and 2. **We have included the revised Figures in the attached PDF file.** They will be incorporated into the manuscript later.
Pdf: /pdf/0f8e97468a787ccc79e1189ff1ed7c282962cf5d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning a Neuron by a Shallow ReLU Network: Dynamics and Implicit Bias for Correlated Inputs | Accept (poster) | Summary: The authors analyze the dynamics and implicit bias of gradient flow with the square loss when learning a single ReLU neuron using a one-hidden-layer ReLU network. They assume that the training data are correlated with the teacher neuron (the angles are smaller than $\pi/4$), and that gradient flow starts from a small and balanced initialization. They give a non-asymptotic convergence analysis. In the limit where the initialization scale tends to zero, the resulting network has rank $1$. Namely, all non-zero neurons point in the direction of the teacher neuron. On the other hand, the resulting network might not have minimal Euclidean norm. Thus, there is an implicit bias for rank minimization but not for norm minimization.
Strengths: Understanding convergence and implicit bias in overparameterized networks is an important question, that has attracted much interest in recent years. The paper gives a detailed analysis of the trajectory and implicit bias. The analysis is under strong assumptions (single-neuron teacher, correlated training data, small and balanced initialization, etc.), but these assumptions are present also in existing results, and the analysis of gradient flow is challenging even under such assumptions. Finally, the paper is well-written.
Weaknesses: In Assumption 1:
- Item (iv): why is it a measure-zero event?
- Items (iv) and (v): I think that the assumptions should specify properties of the training data and the training algorithm. Then, the properties of the trajectory should be shown using these assumptions. Can you specify items (iv) and (v) as assumptions on the data+algorithm?
Other than that, I don’t have major concerns. An obvious limitation is the strong assumptions, and specifically the assumption on the correlated training data, but as I already mentioned, I think that it is reasonable here.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the “weaknesses” section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > In Assumption 1:
> - Item (iv): why is it a measure-zero event?
> - Items (iv) and (v): I think that the assumptions should specify properties of the training data and the training algorithm. Then, the properties of the trajectory should be shown using these assumptions. Can you specify items (iv) and (v) as assumptions on the data+algorithm?
Regarding item (iv), a single yardstick trajectory at any time $t$ follows a direction $\mathbf{\gamma}\_S$ for some $S \subseteq [n]$. The set $S$ changes at most $n$ times, namely at the crossing of $\bigcup\_{i \in [n]} H\_i$, where $H\_i$ is the set of vectors orthogonal to the training point $\mathbf{x}\_i$. Consider a set of targets $H\_i \cap H\_k$, of dimension $d-2$. It can be reached by a straight-line trajectory from a convex polyhedron $P$ of dimension at most $d-1$. Thus, the previous change of direction occurs at a vector from $P \cap \bigcup_{i \in [n]} H\_i$, which is a polyhedron of dimension at most $d-2$ as well. Thus, in order for a vector to reach $\bigcup_{i < k} H\_i \cap H\_k$ by a yardstick trajectory, it must belong to a finite union of affine subspaces of dimension $d-1$. Therefore, if each $\mathbf{z}\_j$ at the initialisation is sampled from any absolutely continuous distribution on $\mathbb{R}^d$, the probability that Assumption 1 (iv) is violated is $0$.
Regarding item (v), we would say it is an assumption about the gradient flow, and it has featured in the literature, e.g. in Eberle et al. “Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation” ERA 31(5): 2519-2554, 2023, where on page 2535 it is paraphrased as “the set of all degenerate neurons of the GF solution at time $t \in [0, \infty)$ is non-decreasing in the time variable” (in fact, our Assumption 1 (v) is slightly weaker than that). The trajectories of the gradient flow that this assumption excludes cannot arise as limits of paths of gradient descent with any fixed value for the derivative of $\sigma$ (the non-linear ReLU function) at zero: if that value is $0$, then any deactivated neuron cannot activate subsequently even if there are data points exactly on its ReLU boundary; and if that value is greater than $0$, then in our setting the deactivation of any neuron would necessarily involve removing all data points from its ReLU boundary. Thus arguably these impractical trajectories might a priori exist only due to a quirk of the theoretical setup of the gradient flow.
We are happy to expand on these points in the main paper, and we shall provide in the appendix a more detailed proof of the measure zero for item (iv).
---
Rebuttal Comment 1.1:
Comment: Thanks for the response | Summary: The paper studies the problem of learning a single ReLU using a 2-layer ReLU network using gradient flow on both layers. The main assumption is that the data is correlated with the target neuron, while other milder assumptions are also used (e.g. specific initialization and spectral assumption on the data matrix). The main result is a two-phase convergence to a global minimum. Several experiments are also given.
Strengths: - The main convergence result is novel AFAIK, and shows an interesting convergence dynamic, where in the first phase the learned weights either align with the target neuron or deactivate, and in the second phase converge to a global minimum.
- The connection between the global minima of the problem and the minimal norm solution in Section 7 is interesting and brings forward the question of whether minimizing the empirical loss results in a minimal norm solution, which was studied in previous works too (e.g. Vardi et al 2022).
- The experimental part shows empirically the behavior of the angles between the learned weights and the target neuron.
Weaknesses: - The assumption that the data is correlated with the target neuron is pretty strong. The motivation for taking an angle of at most $\pi/4$ between each data point and the target neuron is also not clear. What changes if the angle is larger or smaller? I think the authors should elaborate more on this assumption, and on what breaks if it is not assumed. To compare, other papers studying a single neuron usually consider a data distribution spread in all directions (e.g. Frei et al. 2020, Yehudai & Shamir 2020).
- The presentation of the main result is not clear. I think there should be a single Theorem stating the convergence result, with an explicit convergence rate. Currently, there is no single result just a lemma for each phase, and it is difficult to parse the main out-take of the paper. In such a paper, I think the convergence rate of the entire procedure is crucial to fully understand the quality of the result.
- The paper is very technical and, in my opinion, doesn’t provide enough intuition for the quantities that are used. For example, the definition of $\delta$ in line 238: can’t $\delta$ be zero, or at least exponentially small (if the angle between two data points is very close to $\pi/2$)?
- The result in Section 7 is interesting but not quite clear. What does the quantity $\mathcal{M}$ represent? What can we say about the dataset itself so that either option (i) or (ii) of Theorem 8 applies?
- I think that claiming that the implicit bias for the problem studied here is to minimize the norm is a bit misleading. As I understand it, all the learned weights either align with the target neuron or deactivate, this means that the solution converges to a specific form of rank-1 matrix (where each row is either v^* or 0).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - What changes if we assume that the angle between the data samples and the target neuron is $\alpha$, where $\alpha > \pi/4$?
- What is the total convergence rate of gradient flow for learning a single neuron?
- Is it possible to extend the result to gradient descent? Or possibly SGD?
- Is there an explicit condition on the dataset that can be given so that the set of interpolators also minimizes the norm of the predictor
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors do address adequately to the limitations of the paper, although I think it is important to elaborate more on the main assumption about the dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > [...] where in the first phase the learned weights either align with the target neuron or deactivate, [...]
Please see the first item in our response to reviewer PDpC.
> The assumption that the data is correlated with the target neuron is pretty strong. The motivation for taking an angle of at most $\pi/4$ between each data point and the target neuron is also not clear. What changes if the angle is larger\smaller? I think the authors should elaborate more on this assumptions, and what breaks if it is not assumed. [...]
Please see our response to all the reviewers jointly.
> The presentation of the main result is not clear. I think there should be a single Theorem stating the convergence result, with an explicit convergence rate. Currently, there is no single result just a lemma for each phase, and it is difficult to parse the main out-take of the paper. [...]
> What is the total convergence rate of gradient flow for learning a single neuron?
From Lemma 6, in the final stage of the training, the mean square empirical loss converges to zero at an exponential rate $L(\mathbf{\theta}^{T\_2+t}) = \exp(-\Omega(t\delta^4/\Delta^2))$, which is $\exp(-\Omega(t))$ for fixed $\delta$ and $\Delta$.
The time $T\_2$ to reach the final stage by Assumption 2 satisfies $T_2 = O((\ln m+n^2) d \Delta^4/(\delta^8\ln\delta))$, which is $O((\ln m+n^2)d)$ for fixed $\delta$ and $\Delta$. This reflects the lengthy escape from the saddle at the origin due to the small initialisation scale. (After submission, we succeeded in reducing the $n^2$ term here to $n\ln n$.)
We are happy to make these bounds clearer in the paper.
> The paper is very technical, and in my opinion, doesn’t provide enough intuition to understand the quantities that are used. For example, the definition of $\delta$ in line 238. Can’t $\delta$ be zero or at least exponentially small (if the angle between two data points is very close to $\pi/2$).
In our results, the dataset and the unscaled initialisation are given rather than sampled from particular distributions. This generality comes at the price of having the explicit parameters $\delta$ and $\Delta$, which otherwise typically would be replaced by lower and upper bounds (respectively) on measurements of the dataset and the unscaled initialisation that hold with high probabilities.
That $\delta$ is positive follows from Assumption 1, however indeed it may be arbitrarily small since in our worst-case approach the dataset is given by an adversary.
We remark that e.g. Boursier et al. regarded quantities like our $\delta$ and $\Delta$ as constants (see their Appendix B.2).
> The result in Section 7 is interesting but not quite clear. What does the quantity $\mathcal{M}$ represents? What can we say about the dataset itself so that either option (i) or (ii) of Theorem 8 is applied?
> Is there an explicit condition on the dataset that can be given so that the set of interpolators also minimizes the norm of the predictor
At present we do not have an alternative characterisation of the dichotomy shown in Theorem 8.
The following is a sufficient condition for option (i) to apply, i.e. for the set of rank-1 interpolators to also minimise the norm: the inverse of the Gram matrix of the dataset (in our setting this Gram matrix is positive) is a Z-matrix, i.e. the inner product of any two distinct rows of the inverse of the dataset matrix $\mathbf X$ is non-positive.
As we point out in Remark 37 (ii) in Appendix H, option (i) of Theorem 8 also always holds in dimension $d=2$.
That option (ii) may occur is related to (in fact, it implies) the known fact that, for $d>2$, it is not the case that the inverse of every non-singular symmetric non-negative $d \times d$ matrix is a Z-matrix (see e.g. Markham “Nonnegative matrices whose inverses are M-matrices" Proc. Am. Math. Soc. 36 (2), 326-330, 1972). However, additional work was involved to construct our family of examples in Example 36 (case $\mathcal M>0$) in Appendix H.
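To make the sufficient condition above easy to check in practice, here is a small self-contained sketch (the function and both example matrices are our own illustrative constructions, not taken from the manuscript or from Example 36): it tests whether the inverse of the Gram matrix $\mathbf{X}\mathbf{X}^\top$ is a Z-matrix, i.e. has non-positive off-diagonal entries.

```python
# Illustrative check (our sketch) of the Z-matrix sufficient condition for
# the set of rank-1 interpolators to also minimise the norm.
import numpy as np

def inverse_gram_is_z_matrix(X, tol=1e-10):
    G = X @ X.T                              # Gram matrix of the dataset (assumed non-singular)
    Ginv = np.linalg.inv(G)
    off_diag = Ginv[~np.eye(len(G), dtype=bool)]
    return bool(np.all(off_diag <= tol))     # Z-matrix: all off-diagonal entries <= 0

# In d = 2 with positively correlated rows, the condition holds:
X2 = np.array([[1.0, 0.1], [1.0, -0.2]])
print(inverse_gram_is_z_matrix(X2))          # True

# A hypothetical 3x3 positive Gram matrix whose inverse is NOT a Z-matrix
# (consistent with the Markham-type fact cited above for d > 2):
G3 = np.array([[1.0, 0.7, 0.1], [0.7, 1.0, 0.7], [0.1, 0.7, 1.0]])
X3 = np.linalg.cholesky(G3)                  # rows of X3 realise G3 = X3 @ X3.T
print(inverse_gram_is_z_matrix(X3))          # False: a positive off-diagonal entry appears
```

The 3x3 example only illustrates that positivity of the Gram matrix does not by itself yield the Z-matrix property; it is not claimed to realise option (ii) of Theorem 8.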
> I think that claiming that the implicit bias for the problem studied here is to minimize the norm is a bit misleading. As I understand it, all the learned weights either align with the target neuron or deactivate, this means that the solution converges to a specific form of rank-1 matrix (where each row is either $\mathbf{v}^*$ or 0).
In the definition of $\Theta\_{\mathbf{v}^*}$ in Section 6, the normalised non-zero hidden neurons equal $\mathbf{v}^*$. Thus Theorem 7 asserts that, as the initialisation scale decreases to zero, the training converges to a rank-1 matrix in which each row is either a positive scalar multiple of $\mathbf{v}^*$ or $0$. However, the scalars can be quite different, i.e. the hidden neurons in the rank-1 network might have quite different norms (that depend on the initialisation).
We agree that the norm is not necessarily minimised; indeed Theorem 8 (ii) captures the scenario where it is not.
> Is it possible to extend the result to gradient descent? Or possibly SGD?
Thank you for this suggestion. We believe an extension to gradient descent would be possible, by proving an upper bound on the learning rate such that essentially the same behaviour occurs as with gradient flow (as in e.g. Cheridito et al. “A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions” J. Complex. 72, 101646, 2022). At the moment we prefer to leave this to future work, so as not to distract readers from our main goal: analysis of the training dynamics and characterisation of the implicit bias of learning a non-trivial regression task using a non-linear network in a theoretically challenging setting.
An intriguing related question is to what extent the phenomena described in this and other works that assume a small initialisation are consistent with training by gradient descent (possibly stochastic) that uses adaptive learning rates to speed up departures from near saddle points.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. I still think there are some issues with the presentation of the paper, most notably I think it would make it much clearer if there is a single theorem statement which provides the total convergence time, rather than several lemmas. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We did not say so explicitly, but yes, we shall be happy to put the total convergence time in a single theorem as you have suggested --- thank you for this idea. More generally, we shall improve the presentation based on the suggestions in all four reviews. | Summary: This paper studies how a two-layer ReLU network can fit a single neuron. The authors consider the case where all
training points are correlated with the teacher neuron and show that gradient flow from small initialization can
converge to a zero-loss solution.
They divide the training into two stages. In the first stage, the neurons are small and will align with the teacher
neuron or deactivate, depending on the sign of the output weight, and in the second stage, the aligned neurons will grow
and fit the target function.
In addition, they show that as the initialization scale goes to $0$, gradient flow converges to a rank-$1$ solution.
Strengths: * Overall, the presentation is clear, and the main text is relatively easy to follow (see the weaknesses part of the review for some minor issues).
* The use of the yardsticks $\omega_j$ is interesting.
* The geometric argument for the second phase seems novel and may be of independent interest.
* Theorem 8, which shows that the set of balanced rank-1 interpolating networks and the set of minimum-norm interpolating networks can be the same or disjoint, depending on a certain quantity, is surprising.
Weaknesses: * The presentation is overall clear, but the notations are cumbersome.
* For example, I don't think using $w$ and $\omega$ simultaneously is a good idea, especially when they co-occur a lot and represent two closely related objects.
* The detailed definitions of $\delta$ and $S_l$ could be moved into the appendix; define them informally in the main text, and maybe briefly explain how small $\delta$ can be and the intuition behind $S_l$.
* I personally don't like the $\exp(-n)$ initialization scale, though I will accept it as it has been used in previous works. I think it is somewhat cheating because, with it, you can make sure the norm of the neurons is sufficiently small that you can ignore them for any polynomially long time.
* The setting is quite restricted and unrealistic, as it requires the angle between every input and the teacher vector to be smaller than $\pi/4$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Is it possible to adapt your strategy to the Gaussian inputs case? This seems to be a more natural generalization of the orthogonal input setting.
* Could you intuitively explain the meaning of the conditions in the definition of $S_l$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theoretical work and I cannot see any potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > In the first stage, the neurons are small and will align with the teacher neuron or deactivate, [...]
Just to clarify that the alignment in the first phase is with the vector $\mathbf{\gamma}\_{[n]} = \frac{1}{n} \sum_{i = 1}^n y\_i \mathbf{x}\_i$, whose direction is in general different from that of the teacher neuron. (E.g. the teacher neuron might even be outside of the cone spanned by the data points.) This is important because it means that, after the first phase, the aligned neurons may need to change direction significantly to fit the target function. In Example 32 at the end of Appendix F, we show an example run in which the alignment with $\mathbf{\gamma}\_{[n]}$ happens relatively early while the norm is relatively small, whereas alignment with the teacher neuron $\mathbf{v}^*$ happens only towards the final convergence after the growth of the norm.
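To illustrate this clarification numerically, here is a tiny self-contained sketch (ours; the specific angles and the use of exact teacher labels are illustrative assumptions, and this is not the run from Example 32): even when every data point is within angle $\pi/4$ of the teacher $\mathbf{v}^*$, the first-phase direction $\mathbf{\gamma}_{[n]} = \frac{1}{n}\sum_i y_i \mathbf{x}_i$ is generally not aligned with $\mathbf{v}^*$.

```python
# Hedged sketch: the alignment target gamma_[n] of the first phase differs
# in direction from the teacher neuron v*.
import numpy as np

v_star = np.array([1.0, 0.0])                    # unit-norm teacher neuron
# Data points at angles 0, 0.5, 0.7 rad from v* (all below pi/4 ~ 0.785,
# so the correlated-data assumption holds), placed asymmetrically:
X = np.array([[np.cos(t), np.sin(t)] for t in (0.0, 0.5, 0.7)])
y = np.maximum(X @ v_star, 0.0)                  # teacher ReLU labels

gamma = (y[:, None] * X).mean(axis=0)            # gamma_[n] = (1/n) sum_i y_i x_i
cos_angle = gamma @ v_star / np.linalg.norm(gamma)
print(round(float(cos_angle), 3))                # strictly below 1: gamma is not v*
```

So the bundle of neurons aligned to $\mathbf{\gamma}_{[n]}$ after the first phase still has to rotate towards $\mathbf{v}^*$ during the second phase.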
> The presentation is overall clear, but the notations are cumbersome. [...]
We are happy to implement these suggestions. The intention behind the $\mathbf{w}$ and $\mathbf{\omega}$ notations was that their similar looks would remind the reader that they represent related objects, however thank you for pointing out that it may be confusing.
> I personally don't like the $\exp(-n)$ initialization scale, though I will accept it as it has been used in previous works. [...]
The exponential dependence of the initialisation scale on the dataset cardinality $n$ is indeed impractical when $n$ is not small. However, we think it is worthwhile to analyse in detail even such small initialisation scales and to determine a bound under which the analysis holds, partly to provide a solid basis from which to investigate the border to another regime such as mean field or lazy. It is encouraging that our numerical experiments paint a picture which broadly follows the patterns identified in our theoretical results even for initialisation scales that are significantly larger than our theoretical bound. Relaxing that bound, and in particular seeking to reach polynomial dependence on $n$, is therefore an interesting direction for future work. We expect one challenge will be that the first phase of the training will produce bundles of neurons that are not as tightly aligned as we can guarantee in this work.
> The setting is quite restricted and unrealistic as it requires the angle of all inputs and the teacher vector to be smaller than $\pi / 4$.
Please see our response to all the reviewers jointly.
> Is it possible to adapt your strategy to the Gaussian inputs case? This seems to be a more natural generalization to the orthogonal input setting.
We expect that various parts of our submission could be adapted to the Gaussian inputs case, either for a population loss or for an empirical loss with a sufficiently large number of samples.
Xu and Du COLT 2023 considered learning a ReLU neuron by a ReLU network with Gaussian inputs, where only the hidden layer is trained and every last-layer weight is fixed to $1$. Having a population loss over the Gaussian inputs makes some aspects of the training dynamics simpler than in our setting. In particular, already the first phase aligns the neurons to the teacher.
We think it is worthwhile to consider finite datasets, especially in the context of seeking to determine the implicit bias of gradient-based algorithms for regression tasks, where a central question is what interpolants the training converges to in predictor space. The results in this submission establish that, for any dimension $d > 1$ and in our correlated setting, $d$ linearly independent inputs (the algebraic minimum) suffice to learn the teacher neuron; and that the implicit bias is such that, as the initialisation scale tends to zero, exactly the teacher neuron is converged to and moreover with a network of rank $1$. (In the orthogonal setting, the number of inputs is at most $d$.) We suggest one of the main contributions of this work is a full account of the second phase of the training, which due to working with an empirical loss over a finite dataset (rather than a population loss over a spherically symmetric data distribution) may start with the bundle of neurons aligned to a direction which is far from that of the teacher.
> Could you intuitively explain the meaning of the conditions in the definition of $\mathcal{S}\_\ell$?
The entire set $\mathcal S$ is open, bounded, and connected. It is an invariant for the training dynamics in predictor space even when small noise is added to the derivative: namely, $\mathcal{S}$ contains the trajectory of the bundle $\mathbf{v}$ of neurons from the end of the alignment phase onwards, travelling from near the origin to near the teacher $\mathbf{v}^*$, see e.g. Example 30 (and Figure 3) in Appendix F.
The $\Omega\_k$, $\Phi\_\ell$, and $\Psi$ conditions control the dynamics: the ratios $\nu\_k / \nu\_k^*$ grow towards $1$ and then overshoot $1$, all sequentially one coordinate at a time: each component $\mathcal{S}\_\ell$ basically contains the segment of the trajectory during which the $\ell$th coordinate of the bundle grows towards its target, and the “handover” to $\mathcal{S}\_{\ell + 1}$ (for $\ell < d$) happens exactly at the point of the overshoot. The coordinates, $\nu\_k$ and $\nu\_k^*$, are with respect to the basis consisting of the eigenvectors of $\frac{1}{n} \mathbf{X} \mathbf{X}^\top$.
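As a concrete illustration of this change of basis (a hypothetical sketch, taking $\mathbf{X}$ to be $d \times n$ with the data points as columns), the coordinates $\nu_k$ of a vector in predictor space are its inner products with the orthonormal eigenvectors of $\frac{1}{n} \mathbf{X} \mathbf{X}^\top$:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 4
X = rng.normal(size=(d, n))        # columns are the n data points

# Orthonormal eigenbasis u_1, ..., u_d of (1/n) X X^T.
evals, U = np.linalg.eigh(X @ X.T / n)

v = rng.normal(size=d)             # a vector in predictor space
nu = U.T @ v                       # coordinates nu_k = <u_k, v>

# v is recovered exactly from its eigenbasis coordinates.
assert np.allclose(U @ nu, v)
```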
The non-linearity of the $\Psi\_{k, k'}^\uparrow$ constraint makes the boundary of the set $\mathcal{S}$ repelling for the approximate training dynamics when approached from the inside.
Finally, $\Xi$ defines an ellipsoid of all vectors that have an acute angle with the derivative of the training dynamics in predictor space. Moreover, $\Xi$ adds a “padding” inside the boundary determined by the small quantity $\lambda^{\varepsilon / 3}$ to account for the approximate derivatives.
If you think it would be helpful, we shall add explanations along those lines to the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I will keep my score. | Summary: Convergence and implicit bias of non-linear networks is an important open question in deep learning theory. The paper studies these questions in the case of regression with ReLU networks with a single teacher neuron. It proves that at a vanishing initialization scale the student neurons align with the teacher (or gets deactivated). It also shows an interesting counter example such that the implicit bias as initialization tends to zero need not be a minimum norm interpolator.
Strengths: a) Going beyond the orthogonal data, the paper proposes an interesting setting which helps analyse the case of correlated inputs.
b) The geometric technique to study the convergence after the alignment phase is novel.
c) The scenario proposed where the implicit bias as $\lambda \to 0$ is a rank minimizing one instead of a minimum norm interpolator is a very interesting contribution.
Weaknesses: a) The setting is simplified: there is only a single teacher neuron and all the labels are positive. It directly follows that neurons (at least yardstick neurons) with a negative last-layer weight decrease in norm. The assumption that the inputs are correlated further ensures that they are deactivated. This makes the analysis easier.
b) The technical analysis follows the same strategy as Boursier et al. Some aspects are easier as there is only one saddle to escape.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: a) It is mentioned in the contributions that there exists a case where the minimum-norm interpolator is rank 2 instead of rank 1. Can the authors comment on how the non-linearity plays a role here? Can the authors provide a simple example giving a more detailed illustration?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: a) The experiments are conducted with synthetic data and only in the case of a single teacher neuron. It would be interesting to see some empirical evidence, in the case of more than one teacher neuron, of whether the phenomena characterized in the paper hold in more generality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > b) The technical analysis follows the same strategy as Boursier et. al. Some aspects are easier as there is only one saddle to escape.
We think that the presence of the negative labels and the consequent second saddle in Boursier et al. did not introduce major difficulties in that work, i.e. that the main technical achievements in Boursier et al. lie elsewhere in that work. Indeed, the proofs of the technical results that take care of the negative labels in Boursier et al. (Lemma 8 and Lemma 11) proceed mostly along the same lines as the proofs of the corresponding results about the positive labels (Lemma 7 and Lemma 10, respectively).
We remarked in the second paragraph of Appendix A that our results should be straightforward to extend to a setting with an orthogonally separable dataset labelled by two teacher neurons, which would allow both positive and negative labels, and result in a second saddle. We chose not to develop that case in the paper because it would be a relatively shallow addition from a technical point of view. Please see the bottom plot in the attached PDF for an experimental illustration.
We suggest that, in technical aspects (proofs), our correlated setting is considerably more difficult to analyse compared to the orthogonal setting of Boursier et al. In particular, our handling of the second phase during which the active neurons simultaneously grow and turn involved a novel geometric technique.
> a) It is mentioned in the contributions that there exists a case where the minimum-norm interpolator is rank 2 instead of rank 1. Can the authors comment on how the non-linearity plays a role here? Can the authors provide a simple example giving a more detailed illustration?
A family of examples is provided in Example 36 (case $\mathcal{M} > 0$) in Appendix H, for any dimension $d > 2$. The core of the construction is three data points, e.g. $(16, 0, 0)$, $(16, -4, 1)$, $(16, 4, 1)$, and a teacher neuron, e.g. $(0.8, 0, 0.6)$, whose norm is $1$. In this instance, the labels are therefore $12.8$, $13.4$, $13.4$, respectively. However, the same labels are produced by a sum of two ReLU neurons $(0.8, 0.05, 0.4)$ and $(0, -0.1, 0)$, whose sum of norms is approximately $0.896 + 0.1 = 0.996 < 1$. In order for the second neuron to contribute to the label of just one data point out of three, non-linearity is essential. Without it the sum of the two neurons would label the third data point by $13$ rather than $13.4$. It is straightforward to obtain from this example a balanced one-hidden layer ReLU network with two neurons which thereby has smaller norm than any interpolator of rank $1$.
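The arithmetic in this example is easy to check mechanically; the following snippet (our own verification, not the authors' code) reproduces the labels, the cheaper two-neuron interpolant, and the role of the non-linearity:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Three data points and the unit-norm teacher neuron from the example.
X = np.array([[16.0,  0.0, 0.0],
              [16.0, -4.0, 1.0],
              [16.0,  4.0, 1.0]])
teacher = np.array([0.8, 0.0, 0.6])
labels = relu(X @ teacher)                    # [12.8, 13.4, 13.4]

# A sum of two ReLU neurons producing the same labels at smaller norm.
w1 = np.array([0.8, 0.05, 0.4])
w2 = np.array([0.0, -0.1, 0.0])
preds = relu(X @ w1) + relu(X @ w2)

assert np.allclose(labels, [12.8, 13.4, 13.4])
assert np.allclose(preds, labels)
assert np.linalg.norm(w1) + np.linalg.norm(w2) < 1.0   # approx. 0.996

# Without the non-linearity the second neuron would also shift the third
# data point, which would then be labelled 13.0 rather than 13.4.
assert np.isclose((X @ (w1 + w2))[2], 13.0)
```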
If you think it would be helpful, we would be happy to include a short example along these lines in the main paper.
> a) The experiments are conducted with synthetic data and only in the case of a single teacher neuron. It would be interesting to see some empirical evidence, in the case of more than one teacher neuron, of whether the phenomena characterized in the paper hold in more generality.
Learning a multi-neuron ReLU teacher network is known to be challenging to analyse. Already in a uni-variate setting with biases, it is easy to come up with example datasets of only a few data points for which during the training several bundles of neurons emerge that simultaneously change their norms and directions, and apparently interact significantly with each other. Such an example was provided by Boursier et al. in their Figure 4 in Appendix A.1. We believe that our work provides a substantial and important step towards understanding such phenomena in arbitrary dimension, where we focused on alignment of the neurons and their simultaneous growth and turning, leaving for future work significant interactions between multiple bundles of neurons.
Please see the middle two and the bottom plot in the attached PDF for some results from additional experiments with two teacher neurons. For small scales of the noise used to generate the synthetic datasets (where the smallest scales are such that the angles between the data points and the corresponding teacher concentrate around $\pi / 4$), the phenomena we identified theoretically still seem to hold, where the training passes near a second saddle point as we outlined in the second paragraph of Appendix A. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their positive and encouraging reviews, and for comments and questions, which will help us improve the submission.
We attach a PDF with plots from a few additional experiments, and refer to it in our responses to some of the reviewers.
In what follows we elaborate on the $\pi / 4$ condition on the angles between the data points and the teacher neuron, which reviewers PDpC and ytEj commented on and asked about, and which may be of interest also to the other reviewers.
* Theoretically, whilst the $\pi/4$ condition is strong, it enables us to focus on datasets that are not orthogonal, thus taking on the main challenge posed by Boursier et al. NeurIPS 2022. Notice, e.g., that orthogonal datasets in $\mathbb R^d$ may have at most $d$ data points, whereas in our work the number of data points is unbounded.
* A key place where we rely on the $\pi/4$ condition is to show that, during the second phase of the training, all neurons that form the aligned bundle are active on all (and thus the same set) of data points (see Proposition 26 in Appendix F). The $\pi / 4$ condition also implies that the angle between any two data points is less than $\pi / 2$, which streamlines the first phase and ensures that it produces only one bundle of aligned neurons. We remark that, in dimension $2$ (which covers the uni-variate setting with biases), the $\pi / 4$ restriction with the teacher neuron can in fact be replaced by this weaker $\pi / 2$ restriction between data points.
* Please also see the top four figures in the attached PDF for some results of additional experiments that explore the effects of increasing the scale of the noise used to generate the synthetic datasets so that quickly most of the data points exceed the $\pi / 4$ angle with the corresponding teacher neuron. The growing maximum angles between neurons at the end of the training indicate that we no longer have a single (or one per teacher neuron) aligned bundle of neurons forming and sticking together for the rest of the training.
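The implication in the second bullet above — that the $\pi/4$ bound with the teacher forces any two data points to be at an angle below $\pi/2$ — is just the triangle inequality for angles on the sphere, and can be checked numerically (an illustrative sketch with synthetic data, not the paper's experiments):

```python
import math
import random

def angle(u, v):
    """Angle between two non-zero vectors, in radians."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

random.seed(0)
d, n = 3, 40
teacher = [1.0] + [0.0] * (d - 1)

# Rejection-sample data points within angle pi/4 of the teacher.
points = []
while len(points) < n:
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    if angle(x, teacher) < math.pi / 4:
        points.append(x)

# Triangle inequality for angles:
# angle(x_i, x_j) <= angle(x_i, v*) + angle(v*, x_j) < pi/4 + pi/4 = pi/2.
max_pairwise = max(angle(points[i], points[j])
                   for i in range(n) for j in range(i + 1, n))
assert max_pairwise < math.pi / 2
```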
We believe this submission paves the way for future work to generalise both the strictly orthogonal datasets of Boursier et al. and the correlated datasets with the $\pi / 4$ bound considered here.
We are happy to add remarks such as above to the paper. We shall make clear all the places in the proofs where we use the $\pi / 4$ condition, and we shall report on the additional experiments in the appendix.
Pdf: /pdf/e2c4f4b6a4a7f3ef4b9666eada5c62533944f206.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks | Accept (poster) | Summary: The authors consider the task of learning curve extrapolation, i.e., the aim is to predict the performance of a given model wrt. e.g., accuracy/log-likelihood over time, given current observations.
Their proposal primarily relies on prior-data fitted networks (Müller et al., 2022) trained on samples from a prior over curves adapted from their sole comparison method, that of Domhan et al. (2015).
Compared to this prior MCMC-based method (Domhan et al., 2015), the authors show greatly improved performance in most setups as well as very strong improvements in runtime.
Strengths: - The paper focuses on an important task in the AutoML literature that has so far not received a lot of attention.
- The authors evaluate their approach extensively on a varied set of experiments properly evaluating all their claims.
- The paper is overall well written with two minor deficiencies (see next section).
- The authors provide an extensive implementation of the model. However, this is also necessary, as replication would be difficult given the coarse level of detail in the written part.
- With respect to originality: the contribution is primarily in the application. The model itself relies on the PFN model by Müller et al. (2022), and the prior is an adaptation of Domhan et al. (2015).
This can be seen as a weakness, yet given that the application of PFNs to this field is novel and the application is an important one with clear results, I see this only as a very minor weakness if at all.
- Concerning significance, I lack a deep understanding of the AutoML literature to properly judge the significance. But from what I know of the field, the results look very promising and should be of great interest to many readers.
Weaknesses: - It has two deficiencies in the writing/structure of the paper.
1. Its backbone is a prior-data fitted network. However, that model itself is only briefly introduced in a single paragraph with a figure that is barely understandable without reading the original paper.
The paper requires a proper discussion of this approach in either the main text or the appendix.
2. Training details and hyperparameters are barely discussed with many essentials missing, e.g., the reader has to guess the meaning of `nb_data`, `emsize`, `nlayers`. It is an easy guess, but a guess nevertheless. Adding the abbreviations to the paragraph in l157 would quickly fix this.
- As stated above, the novelty is almost solely in the application which could be considered a weakness. But in my opinion this is only a very minor weakness if at all.
- Table 3 lacks error bars.
## Minor
- Figure 2 is missing details on the colors of the arrows
- The caption of Table 2 should be above the table
- Figure 3 contains an unexplained horizontal line
- l319-320 the sentence is broken (interesting appears twice)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In l216 the authors mention that the best MCMC-PP method took over 80 seconds. In Table 3 the 80s method does, however, not not improve upon the 30-second version. Can the authors clarify this statement?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss limitations of their method, but no potential negative societal impact of their work. The second is of lesser importance in this work anyway.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Regarding the weaknesses you raised:
**Its backbone is a prior-data fitted network. However, that model itself is only briefly introduced in a single paragraph with a figure that is barely understandable without reading the original paper. The paper requires a proper discussion of this approach in either the main text or the appendix.**
We agree that our description thereof in the current manuscript is rather high-level. We will add a section elaborating on some PFN-specific implementation details and design choices in the appendix. (We also would like to mention that the code for both PFN and LC-PFN is open source, at least specifying all details in the code. But that’s of course not an excuse for the method to be only explained very briefly in the paper, and we’ll fix that.)
**Training details and hyperparameters are barely discussed with many essentials missing, e.g., the reader has to guess the meaning of ``nb_data``, ``emsize``, ``nlayers``. It is an easy guess, but a guess nevertheless. Adding the abbreviations to the paragraph in l157 would quickly fix this.**
We will modify this paragraph accordingly, thank you for pointing out this issue and your suggestion.
**As stated above, the novelty is almost solely in the application which could be considered a weakness. But in my opinion this is only a very minor weakness if at all.**
Agreed, thank you for this perspective.
**Table 3 lacks error bars.**
Our apologies, please find the updated table including the standard error in the PDF attached to the global rebuttal (as Table 1).
**Minor comments**
We will make the necessary changes, thank you for pointing out these minor issues.
### Regarding your questions / comments:
**In l216 the authors mention that the best MCMC-PP method took over 80 seconds. In Table 3 the 80s method does, however, not not improve upon the 30-second version. Can the authors clarify this statement?**
This is an unfortunate error in the discussion. Our apologies and thank you for spotting it.
It should say “the best MCMC-PP method took over 30 seconds.”
Context: In our preliminary experiment, conducted with a smaller sample size, the mentioned MCMC configuration appeared to show a slight advantage. However, this difference did not hold up in the final, large-scale experiment. Unfortunately, we overlooked updating this information in our discussion, and we apologize for any confusion this may have caused. In the final experiment, in Section 4.1, we found that no single MCMC variant consistently outperformed others across different cutoffs. Given the subtle differences among top-performing configurations and the varying computational efficiency, we chose the more efficient variant for future comparisons. Note that this ‘cheaper’ MCMC variant still requires multiple orders of magnitude more time than the cheapest LC-PFN variant (to obtain a worse average LL).
---
Rebuttal Comment 1.1:
Comment: Thank you for your further clarifications. I keep my score of recommending acceptance. | Summary: The authors in this submission applied prior-data fitted neural networks (PFNs) to the learning curve extrapolation task, for which the main goal is to predict the performance of a machine learning model in later epochs, based on the information from earlier epochs. The target is modeled as a linear combination of basis growth curves, and the proposed LC-PFN algorithm aims to minimize the cross-entropy loss as in PFN. The authors tested the proposed method across a few datasets and showed that PFN (i) achieves better prediction and (ii) is computationally efficient, when compared with its competitors.
Strengths: 1. The idea of applying PFNs in the learning curve extrapolation task sounds reasonable, and is intuitively more efficient when compared with MCMC.
2. The application of LC-PFN in early stopping could be valuable, as it can be helpful for the model selection step. The experiments in Section 4.3 also provide some promising results.
Weaknesses: 1. In my opinion the selection of the 3 parametric basis curves seems a bit ad hoc. It is not convincing that they are sufficient to fit different learning curves. Furthermore, the selection of hyperparameters in the priors also lacks details.
2. The results could be more convincing if the authors can compare against more previous methods, instead of MCMC only. For instance, how about the one by Klein et al., 2017, which used Bayesian neural networks for learning curve prediction?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. In Table 1, is the support for x integers, as it corresponds to $f_k (t | \theta_k)$ in the model?
2. For the equation just below L132, what does $<$ mean?
3. Do you need to assume $t > T^{'}$ in your LC-PFN model during training?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Regarding the weaknesses you raised:
**In my opinion the selection for the 3 parametric basis curves seems a bit ad-hoc. It's not convincing whether they are sufficient to fit different learning curves. Furthermore, selection of hyperparameters in priors there also lacks details.**
Thanks for the question. We would like to add some further context on how we got to this prior: For a fair comparison, we chose this prior to mimic the setting in Domhan et al. (2015), which is the most closely related prior art. Sadly, using the exact same prior was not possible (we cannot generate samples from a uniform distribution with unbounded support, see Section 3.2). Hence, we decided to ‘peak’ the prior / limit parameter ranges to cover a wide spectrum of possible curves, while still guaranteeing the competitiveness of our baseline. Despite these efforts, we agree that the resulting prior is far from perfect (see also Section 5, l326). Note that we settled for 3 curves, as using all 11 curves would have only partially fixed this issue, made this ‘peaking the prior’ exercise much more complicated, and in relative comparisons would likely negatively impact our MCMC baseline. In Section 4.2, we also compare to the original implementation using all 11 basis curves, to establish that LC-PFN is competitive on real learning curves, despite using only 3 basis curves.
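For concreteness, a prior of this kind can be sketched as follows. The basis curves and parameter ranges below are illustrative placeholders only (the paper's actual prior follows Domhan et al. (2015), with its own basis set and ranges):

```python
import math
import random

random.seed(1)

# Two illustrative basis growth curves (hypothetical choices,
# not the paper's exact basis set).
def pow3(t, c, a, alpha):
    return c - a * t ** (-alpha)

def exp3(t, c, a, b):
    return c - a * math.exp(-b * t)

def sample_curve(T=50, sigma=0.01):
    """Draw one noisy learning curve from the (illustrative) prior:
    a convex combination of basis curves plus Gaussian observation noise."""
    w = random.random()              # mixture weight
    c = random.uniform(0.5, 1.0)     # asymptotic performance
    a = random.uniform(0.1, 0.4)     # initial gap below the asymptote
    alpha = random.uniform(0.3, 2.0)
    b = random.uniform(0.05, 0.5)
    return [w * pow3(t, c, a, alpha) + (1.0 - w) * exp3(t, c, a, b)
            + random.gauss(0.0, sigma)
            for t in range(1, T + 1)]

curves = [sample_curve() for _ in range(100)]
assert all(len(curve) == 50 for curve in curves)
assert all(0.0 < y < 1.1 for curve in curves for y in curve)
```

An MCMC baseline would do approximate posterior inference over the parameters of such a mixture given a partial curve, whereas a PFN is trained once on many curves sampled this way.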
**The results could be more convincing if the authors can compare against more previous methods, instead of MCMC only. For instance, how about the one by Klein et al., 2017, which used Bayesian neural networks for learning curve prediction?**
To the best of our knowledge, and based on a recent literature survey (Mohr and van Rijn 2022), Domhan et al. (2015) is the only work that previously considered Bayesian LCE requiring ONLY a single partial learning curve as input. LCNet (Klein et al., 2017), see also our discussion in the last paragraph of Section 2 (l94), in addition, requires the training hyperparameter settings as input. In fact, the Bayesian neural network used by Klein et al. (2017) only models the dependency of the prior on these training hyperparameters. This dependency must be meta-learned across multiple training runs. The only way we could compare to LCNet, in the current scope, would have been to remove this dependency (e.g., fix the input of the BNN to 0), but this would essentially reduce the BNN to a (gradient-based) MCMC method.
If you are aware of any other prior art that should be included in our comparison, please let us know, and we would gladly add it.
### Regarding your questions / comments:
**In Table 1, is the support for x integers, as it corresponds to $f_k(t | \theta_k)$ in the model?**
Yes. To avoid confusion, we will update the table to use t instead of x here.
**For the equation just below L132, what does < mean?**
It simply means “less than”, comparing two real values (initial and final model performance), constraining the final performance to be better than the initial one (following Domhan et al., 2015).
**Do you need to assume $t > T'$ in your LC-PFN model during training?**
No, one could easily train an LC-PFN by censoring arbitrary points in the curves (vs right censoring only). The benefit is that the resulting model would also be able to do interpolation at the cost of requiring a slightly larger model/training time. Since our work is about extrapolation, we decided to train LC-PFN for this task specifically. | Summary: The authors propose applying prior-data fitted NNs to learning curve extrapolation.
The authors demonstrate that this approach outperforms MCMC inference and is substantially faster.
Strengths: The paper has a novel idea of applying approximate inference via meta-learning learning curves.
The method appears to work in experimental evaluation, but more experiments would make the paper stronger.
The manuscript (all but experimental section) is clear and relatively easy to follow.
The authors provide the code.
Weaknesses: The paper contains only two experiments. One is based on synthetic data.
Table 1 has no error bars. Table 1 indicates that while the performance of MCMC is stable across the hyperparameters, the PFN score depends on the network size, indicating it might not generalize well to other problems.
While the difference in time is clearly significant, the differences in LL are rather small.
While the idea of applying the described method in the considered context is novel, much of this work is a combination of previous work by Domhan 2015 and Muller 2022.
The experimental section could have been written in a more clear way.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is poor performance of MCMC attributed to slow mixing or initializing at a poor point? What about MCMC inference initialized at MAP?
If MCMC has substantially more computational time will it outperform PFN? At the end of the day exact inference should outperform approximate inference.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Regarding the weaknesses you raised
**The paper contains only two experiments. One is based on synthetic data.**
We evaluated LC-PFN in three different experimental setups (Sections 4.1, 4.2, and 4.3). In Section 4.1, we evaluate LC-PFN (and MCMC) on samples of the prior. From a learning curve perspective, this is indeed ‘synthetic’ data. However, from a Bayesian perspective, this is arguably the best way of evaluating and comparing the ability of both methods to perform approximate inference without confounding factors. In Sections 4.2 and 4.3 we use real learning curve data from four different benchmark suites, training a wide variety of different architectures (e.g., MLP, CNN, RNN, Transformer) on a variety of different datasets (e.g., tabular, images, text, protein data) using a variety of different hyperparameter settings (e.g., optimizer, learning rate).
**Table 1 has no error bars.**
Table 1 in the paper does not present experimental results. We assume you mean Table 3? Our apologies, please find the updated table including the standard error in the PDF attached to the global rebuttal (as Table 1).
**Table 1 indicates that while the performance of MCMC is stable across the hyperparameters, PFN score is dependent on the network size indicating it might not generalize well to other problems.**
Table 3 only shows three configurations for each method. In total, we evaluated 27 PFN variants and 216 MCMC variants (see Table 2), and the results for all of these are shown in Figure 3 of the paper. Here, you can see that there is also variability amongst the MCMC configurations (red dots). For PFNs, variability is mostly determined by nb_data (how many curves the PFN was trained on) and to a lesser extent its size (``nlayers`` and ``emsize``). In fact, the figure shows that each of the 9 PFNs considered can be trained to outperform any of the MCMC configurations. While larger models do better (as also observed in other studies using transformers, e.g., Kaplan et al., 2020), this is generally viewed positively (e.g., the method can be scaled up with the availability of compute).
**While the difference in time is clearly significant, the differences in LL are rather small.**
Agreed, our main claim is that LC-PFN is faster without loss of quality.
**While the idea of applying the described method in the considered context is novel, much of this work is a combination of previous work by Domhan 2015 and Muller 2022.**
Agreed, our work essentially applies the method of Muller 2022 (PFNs) to the problem of Domhan 2015 (Bayesian LCE). However, given the novelty of the method, the relevance of the problem, and the magnitude of improvement (in terms of time complexity / practicality), we believe this to be a very significant contribution nonetheless; reviewer 3hos also seems to share this perspective. Also, we made several smaller, yet relevant contributions. In particular, we introduce a novel normalization procedure (see Appendix A), allowing both LC-PFN, and Domhan et al.’s (2015) MCMC to be applied to learning curves using a wide variety of different, possibly unbounded, performance metrics (not just maximizing a metric in [0,1], e.g., accuracy), including the popular “minimizing log loss”.
**The experimental section could have been written in a more clear way.**
Thank you for your feedback, we will improve the writing of this section for the final version but any concrete suggestions you have to further improve this section would be much appreciated.
### Regarding your questions (in reverse order):
**If MCMC has substantially more computational time will it outperform PFN? At the end of the day exact inference should outperform approximate inference.**
You are right, at least in theory and in the limit this should be the case (MCMC with finite chains is also approximate inference). However, the trends observed in Figure 3, in Section 4.1, seem to suggest only marginal improvements can be made by further increasing chain length. To further address your concern, we conducted extensive experiments, the results of which can be found in the PDF attached to our global response. In summary, we discovered / fixed what we presume to be a bug in the original implementation (see global response for details), and ran experiments considering up to 50x longer chain lengths. The trends we observe (see Figure 1 in the rebuttal PDF) suggest that MCMC could eventually attain or even overtake the best LC-PFN. That being said, even with approx. 25,000x longer runtimes than LC-PFN, MCMC does not quite reach its performance, so outperforming it would require impractically long chains.
**Is poor performance of MCMC attributed to slow mixing or initializing at a poor point? What about MCMC inference initialized at MAP?**
This is a very good question. First, a few words on how we initialize the chain (see also Domhan et al., 2015, Section 3.1, page 4). Following Domhan et al. (2015), the parameters for each of the basis curves are least-squares estimates (LSE) obtained using the BFGS optimizer (SciPy implementation). Weights are initialized to $1/K$, where $K$(=3) is the number of basis curves. If this initial point violates the constraints imposed by the prior, a default starting point is used instead. Following your recommendation, we investigated the behavior of MCMC with different initializations, including starting from the MAP estimate and from the default values of the parameters. Figure 1 in the rebuttal PDF shows that the MAP does not improve over the original LSE initialization of Domhan et al. (2015). The fixed default initialization is worst for short chains, but best for very long chains (>20,000), suggesting that greedy initialization (LSE, MAP) may hurt performance on some curves.
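The least-squares initialization described above can be sketched as follows — an illustrative stand-in that fits a single pow3 basis curve to synthetic data with SciPy's BFGS, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic partial learning curve: pow3 ground truth plus small noise.
t = np.arange(1, 21, dtype=float)
rng = np.random.default_rng(0)
y = 0.9 - 0.5 * t ** -0.7 + rng.normal(0.0, 0.005, t.size)

def sse(params):
    """Sum of squared errors of a pow3 curve c - a * t^(-alpha)."""
    c, a, alpha = params
    return np.sum((c - a * t ** -alpha - y) ** 2)

# Least-squares estimate via BFGS, used to initialize the chain.
res = minimize(sse, x0=[0.8, 0.4, 1.0], method="BFGS")
c_hat, a_hat, alpha_hat = res.x
```

With a reasonable starting point the fit recovers parameters close to the ground truth; as described above, if such an estimate violates the constraints imposed by the prior, a default starting point would be used instead.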
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I maintain my score. | Summary: The authors used prior fitted networks to perform learning curve extrapolation. Crucially, they demonstrate that their method vastly outperforms approximate Bayesian inference via MCMC (both in terms of inference time and predictive log likelihood). Moreover, they demonstrate that the proposed approach outperforms heuristics for ending unpromising training runs commonly used in hyper-parameter optimization.
Strengths: This paper presents a super practical method that not only performs well *but* also runs fast, allowing it to be used in many real-time applications. The practicality of the method is greatly bolstered by the impressive suite of empirical experiments performed. Lastly, the paper was well written and pretty easy to follow (though I have some comments).
Weaknesses: The biggest weakness to me is that I don't think the authors spent time on the potential difficulty of using MCMC for this problem. Specifically, the prior, and the corresponding posterior, is constrained to some non-standard subset that prevents standard MCMC algorithms from being used. The geometry of this will heavily affect the performance of MCMC algorithms, as samples that are proposed outside of the set will always be rejected, requiring substantial tuning. That being said, I am a little concerned with the number of samples for which the chain was run: 4,000 samples seems too low for this problem.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Below I will list some comments
1. In figure 1 there is a black dashed line but I can't find in the text what it represents. Without prior knowledge, I thought that maybe it was the true log posterior predictive distribution, which would make it seem like MCMC is outperforming the proposed method. Also, error bars here would be great.
2. For all results, it would help if it was made clear if higher values are better or the converse.
3. For Fig. 4, error bars would be very informative, especially given the range of values. I suggest plotting 20% and 80% quantiles.
4. In Fig. 5, it isn't clear what the error bars are, i.e. standard error, quantiles, etc.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors did a good job explaining the limitation of the methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Regarding the weaknesses you raised:
**The biggest weakness to me is that I don't think the authors spent time on the potential difficulty of using MCMC for this problem. Specifically, the prior, and the corresponding posterior, is constrained on some non-standard subset that prevents standard MCMC algorithms to be used ….**
We agree that Bayesian LCE is a non-trivial problem for MCMC. In fact, this was an important motivation of LC-PFN. For our MCMC baseline, we relied on previous work (Domhan et al., 2015) to make MCMC work reasonably in this challenging setting. While we were forced to make minor changes to the setup, in particular to the prior, we made considerable efforts to retain the competitiveness of the original implementation. Furthermore, we also included the original MCMC implementation as a baseline in our experiments in Section 4.2. To further address your concern, we conducted extensive experiments the results of which can be found in the PDF attached to our global response. First, it is important to note that our original results in Figure 3 in the paper are averages across 10,000 curves. When comparing performances on individual curves, shown in Figure 2 (left, red dots) in the attached PDF, we observe that for the majority of the curves, the best LC-PFN and MCMC perform similarly (i.e., are near the diagonal), and inferior average performance can be largely attributed to ‘failures’ of MCMC on a small fraction of the curves, the ones that are in the lower-right region of Figure 2 (from the rebuttal PDF). After analyzing these failure cases of MCMC, we noticed behavior inconsistent with what was described by Domhan et al. (2015) and fixed the bug we thereby identified as discussed in the global response. Doing this and running MCMC much longer, as you suggested (more below), reduces these failure cases considerably (see Figure 2 (left to right, red vs green dots)).
**I am a little concerned with the number of samples the chain was run: 4,000 samples seems too low for this problem.**
We understand your concern. It is worth noting that Domhan et al. (2015) only used 2,000 samples, which motivated our choice of 4,000 samples as an upper limit. Also, please note that Domhan’s implementation (which we use) uses an ensemble sampler (EMCEE) with 100 workers, and the number of 4,000 samples is actually for each worker, which results in 400,000 samples overall. The trend we observed in Figure 3 of our paper is that while increasing chain length indeed improves performance, the slow rate of improvement suggests that impractically long chains would be required to attain the performance of even the smallest PFN. However, after fixing the bug mentioned above in Domhan’s original implementation, improvement no longer stagnates, and results running MCMC for up to 100,000 samples per worker (i.e., a total of 10M samples) suggest that the gap in performance will close with even more samples (see Figure 1 in the rebuttal PDF, red line). That being said, MCMC still does not reach the average performance of the best LC-PFN, while taking approximately 25,000 times as long.
### Regarding your questions / comments:
**In figure 1 there is a black dashed line but I can't find in the text what it represents. Without prior knowledge, I thought that maybe it was the true log posterior predictive distribution, which would make it seem like MCMC is outperforming the proposed method. Also, error bars here would be great.**
Unless we are mistaken, there is no black dashed line in Figure 1 of the paper. The red/blue dashed lines in Figure 1 represent the 5th and 95th percentiles of the PPD (so the shaded area ~ 90% CI) as inferred using MCMC/LC-PFN, respectively. We will update the legend of Figure 1 to clarify this. We assume you meant Figure 3 instead. The black dashed line in Figure 3 represents the highest log score attained by any MCMC variant (higher is better). Thus, this does not imply that MCMC is outperforming the proposed method, but the opposite. We will update the legend / caption of Figure 3 to clarify this.
**For all results, it would help if it was made clear if higher values are better or the converse.**
Definitely, thank you for the suggestion, FYI:
- Figure 3 / Table 3:
- Higher is better for log score (average LL)
- Lower is better for average runtime
- Figure 4: Lower is better (average rank)
- Figure 5: Lower is better (average regret)
We will clarify this in the captions.
**For Fig. 4, error bars would be very informative, especially given the range of values. I suggest plotting 20% and 80% quantiles.**
Agreed. However, as Figure 4 shows averages of ranks amongst 3 methods, we feel quantiles (which will be one of the integers 1, 2, or 3) would not be very informative. We propose to use +-1 standard error (SE) instead.
**In Fig 5., it isn't clear what the error bars are, i.e. standard error, quantiles, etc.**
The shaded area corresponds to +- 1 standard error. We will update the caption to clarify this, thanks! | Rebuttal 1:
Rebuttal: In this global response, we would like to thank all reviewers for their constructive feedback. We are glad that our work was generally well-received and we address specific concerns / questions raised by the reviewers in our individual responses. Multiple reviewers expressed some concerns about our MCMC baseline. To support our response, we conducted extensive experiments that we discuss in the following; accompanying figures are found in the attached PDF.
**Improving baseline implementation:** We analyzed cases where MCMC performed poorly in order to determine any potential causes of its non-convergence to the optimum. In these analyses, we noticed behavior inconsistent with what was described by Domhan et al. (2015). Specifically, the hard constraints (monotonicity and range) were checked for every basis curve ($f_k$), rather than on the combination ($f_{comb}$), as described in their paper (see Domhan et al., 2015, Equation 5). While we assume this was intended as an optimization in their code, it is not equivalent (i.e., a combination can satisfy these constraints, while individual curves do not) and this is exactly what was happening for the curves MCMC performed worst on. After fixing this (which we would call a bug) in their original implementation, we re-ran the fixed version of the controlled experiment (Section 3). This yielded results much more in line with what one would expect from MCMC for higher sample sizes: while it takes a long time, it finds solutions as good as those by LC-PFN (see Figure 1 in the rebuttal PDF). It is crucial to emphasize that despite these improved results, our main claim about LC-PFN obtaining many orders of speedup without loss of quality w.r.t. MCMC remains valid.
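The distinction between checking the hard constraints on each basis curve versus on their combination can be made concrete with a small numerical sketch. The curves and weights below are purely hypothetical (not those of Domhan et al., 2015); they only illustrate why the two checks are not equivalent:

```python
import numpy as np

# Hypothetical learning-curve model: a weighted combination of basis curves.
# The hard constraints (monotonically increasing, values within [0, 1]) are
# meant to hold for f_comb -- not necessarily for each individual f_k.
t = np.linspace(0.1, 1.0, 50)
f1 = 1.0 - np.exp(-3.0 * t)   # increasing and within [0, 1]: valid on its own
f2 = 1.2 - 0.3 * t            # decreasing and exceeding 1: invalid on its own
weights = np.array([0.8, 0.2])
f_comb = weights[0] * f1 + weights[1] * f2

def satisfies_constraints(curve):
    """Monotonically increasing and bounded within [0, 1]."""
    return bool(np.all(np.diff(curve) >= 0.0)
                and curve.min() >= 0.0 and curve.max() <= 1.0)

# The combination is valid even though one basis curve is not, so rejecting
# samples via the per-curve check discards valid regions of the posterior.
print(satisfies_constraints(f_comb))                    # True
print(all(satisfies_constraints(f) for f in (f1, f2)))  # False
```

This is exactly the failure mode described above: a sampler that enforces the per-curve check rejects proposals whose combination is perfectly admissible.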
**Ablating MCMC w.r.t the initialization and the number of samples:** We ran experiments to assess the effect of the initialization strategies, including the original least-square estimate (LSE), the Maximum a posteriori (MAP) estimate, and a fixed default value of the parameters. Furthermore, we ran experiments on considerably larger sample sizes (up to 100,000 samples). As shown in Figure 1 in the attached PDF, LC-PFN still consistently outperforms MCMC in this ablation.
Pdf: /pdf/2e95fbbf8fde30452b78415a2fc634c99d80a947.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Physics-Informed Bayesian Optimization of Variational Quantum Circuits | Accept (poster) | Summary: The paper introduces a new approach for BO of VQEs. BO can be a good match for this problem since it models the noise (measurement and circuit level) of quantum circuits. The main idea of the paper is to use a kernel that is adapted to the form of the VQE objective assumed when variational parameters are associated to single qubit gates. The paper also introduces EMICoRE and NFT-with-EMICoRE, acquisition function and parameter update strategies. Overall, the idea of using an adapted kernel for this task is well motivated, the paper is well written and presents numerical evaluation of their method. I think however that in the current state, the experimental evaluation is too limited to be convincing of the promise of this method with respect to other optimisation algorithms for VQEs.
Strengths: The paper addresses an important problem in quantum computing. It is well written and accessible to both the quantum computing and ML communities, I think. It introduces the following technical innovations:
- Kernel adapted to VQEs: the kernel decomposition based on the form of the cost function of VQEs is a sound contribution.
- NFT-with-EMICoRE: a new BO algorithm that builds on NFT and a new acquisition function.
Weaknesses: 1. Specific ansatz: the paper deals only with parametrised single qubit gates. While I believe this is not a theoretical limitation - as one can always decompose any unitary as a product of single qubit gates and CNOTs - it is not clear to me how practical this parametrisation is. The authors could add an explanation of whether this is a limitation for practitioners and what steps one needs to take to apply their algorithm to a generic parametrised quantum circuit.
2. Limited experimental evaluation: the paper only discusses Q=3 and 5 qubit systems, while classically one could simulate easily bigger systems. Also the paper does not compare against other methods in the literature, it only compare against NFT. More benchmarking for larger systems and against other techniques would be required to assess the promise of the method.
3. Unclear benefit of the kernel: looking at figure 2, it is unclear that the red curve is better than the blue curve - in fact, in the left figure it looks to me that RBF is doing better.
4. Missing ablation for the EMICoRE acquisition function. One contribution of the paper is to introduce this new acquisition function, but ablations showing its importance are missing.
Minor:
- $\mu_X$ depends on y as well in section 2.1
- "the the" in line 308
- unclear what conjugacy means in line 97
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: 1. Could you benchmark your method against other methods for VQE optimisation than NFT? And for larger systems?
2. Why is the ground state energy not reached in figure 3 say? I would expect that it is possible to find a VQE ansatz that can solve this easy problem.
3. Can you add ablations for the EMICoRE acquisition function? Have you tried different heuristics for updating $\kappa$?
4. Can you explain how your method can be used to optimise general parametrised circuits beyond those that have single qubit gates?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 3 good
Contribution: 2 fair
Limitations: - limited experiments and benchmarking
- limited explanation of their choice of parametrised circuits
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for highlighting the strengths of our work. We hereby address the concerns raised in the review:
## **Weaknesses**
> 1. Specific ansatz: ...
The set of single-qubit rotation gates and entangling CNOT gates are indeed universal, and allow for synthesizing all unitary operations on a subset of qubits in an efficient manner, see [1]. More generally, this is the essence of the Solovay-Kitaev theorem [2]. Moreover, most commercially available superconducting quantum devices offer single-qubit rotation gates and CNOT entangling gates, such that our ansatz does not pose a restriction in practice. In fact, Qiskit, one of the most prevalent SDKs for developing quantum programs, offers the ansatz structure we use as *"a heuristic pattern that can be used to prepare trial wave functions for variational quantum algorithms or classification circuit for machine learning"* (see [qiskit.org/documentation](https://qiskit.org/documentation/stubs/qiskit.circuit.library.EfficientSU2.html)). Furthermore, our work is not limited to the type of ansätze we studied in the manuscript. The only restriction is that the parametric gates in the ansatz are of the form $R(\theta) = \exp(-i\theta G)$, where $G$ is an arbitrary gate operation that fulfills $G^2 = \mathbf{1}$, see Eq. (2) in the NFT paper [3]. This comprises any gate where $G$ is a tensor product of Pauli operators, for example parametric entangling gates of the form $\exp(-i\theta Z_l\otimes Z_k)$ with $Z_j$ the Pauli $Z$-matrix acting on qubit $j$.
> 2. Limited experimental evaluation: ...
In the paper's Appendix G, we show results for Q=7 in Figs. 13-15. The method we compare to, NFT [3], is widely accepted as the current state-of-the-art. Specifically, in [3], the authors showed superior performance compared to many gradient-based and gradient-free methods, i.e., SPSA [4], BFGS [5], N-M [6], Powell [7], CG [8]. We, therefore, refrained from including these inferior baselines to streamline the presentation. Our numerical experiments clearly establish that EMICoRe exhibits favorable scaling with the system size, i.e., our method can beat the state-of-the-art method by an increasing margin as the number of qubits is increased, see Figs. 7-14 in Appendix G.
> 3. Unclear benefit of the kernel: ...
In Table R3 below, we show that our VQE kernel reaches, on average, lower energy and higher fidelity, with smaller variance, when compared to RBF and periodic kernels. Furthermore, the VQE kernel is a necessary ingredient for the EMICoRe strategy, which our experiments show outperforms the state-of-the-art baseline.
> 4. Missing ablation ...
In Fig. 2 in the main paper, we evaluated the performance gain of our VQE kernel without EMICoRe.
To directly show the ablation we conducted additional experiments with $Q=5$ and $L=3$ in the rebuttal PDF. We showcase the benefits of EMICoRe by comparing it to the widely adopted Expected Improvement (EI) acquisition function for two types of systems, i.e., the Ising model at and away from criticality.
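For context on the comparison above, the Expected Improvement baseline admits a closed form under a Gaussian posterior. The sketch below is the generic textbook EI for minimization, not the paper's implementation; `mu` and `sigma` are placeholders for the GP predictive mean and standard deviation at a candidate point:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """Closed-form EI for minimization under a Gaussian posterior N(mu, sigma^2).

    mu, sigma: predictive mean / std at a candidate point (placeholders here);
    best: lowest observed energy so far; xi: optional exploration margin.
    """
    if sigma <= 0.0:
        return max(best - mu - xi, 0.0)
    z = (best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (best - mu - xi) * cdf + sigma * pdf

# At equal predictive mean, higher uncertainty yields higher EI (exploration).
assert expected_improvement(0.0, 1.0, best=0.0) > expected_improvement(0.0, 0.1, best=0.0)
```

Unlike EMICoRe, which exploits the confident-regions structure induced by the VQE kernel, plain EI scores each candidate point in isolation; this is the design difference the ablation probes.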
## **Questions**
> 1. Benchmark, larger systems
It is well-known that NFT is the current state-of-the-art algorithm for optimizing VQEs. We therefore refrained from including other, inferior baselines.
Furthermore, as mentioned above, our method can beat the state-of-the-art method by an increasing margin as the system size increases. We refer to our reply in the **Weakness** section above.
> 2. Ground state energy in figure 3
In Figure 3 from the main paper, our main goal was to demonstrate the faster optimization of EMICoRE compared to NFT at a fixed quantum measurement budget. In Fig. R2 in the attached PDF, we demonstrate that our algorithm can reach the ground state for a larger measurement budget. We will include these experiments in the revised manuscript.
> 3. Ablations
We performed additional experiments in the direction of an ablation study for EMICoRe. This includes comparing to other acquisition functions, e.g., EI, as shown in Fig. R1 in the rebuttal PDF, and performing an in-depth analysis for setting the CoRe threshold $\kappa$, shown in the Table R1 from the general rebuttal above.
> 4. General parametrised circuits
As we pointed out in the **Weakness** section, our method is not limited to the parametric gates we use in our ansatz nor is it limited to single-qubit parametric gates. In general, our approach can be applied to any parametric gate of the form $R(\theta) = \exp(-i\theta G)$ with $G$ a gate that has the property $G^2=\mathbf{1}$, see Eq. (2) in the NFT paper [3]. This includes a wide range of gates, in particular, $G$ can be an arbitrary Pauli string $\otimes_{q=1}^QP_q$, where $P_q$ is one of the Pauli matrices {$\mathbf{1}, X, Y, Z$} acting on qubit $q$. More specifically, this set also includes the entangling gates $R_{XX}=\exp(-i\theta X_l\otimes X_k )$ and $R_{ZZ}=\exp(-i\theta Z_l\otimes Z_k )$, which are more commonly realized in trapped ion setups.
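The gate family above can be checked numerically. The following is our own small sanity check (not from the paper's code), using the Pauli string $G = Z \otimes Z$: whenever $G^2 = \mathbf{1}$, the exponential reduces to the Euler-type identity $\exp(-i\theta G) = \cos(\theta)\mathbf{1} - i\sin(\theta)G$, which is unitary and composes additively in $\theta$.

```python
import numpy as np

# Pauli Z and the two-qubit Pauli string Z (x) Z, which satisfies G @ G = I.
Z = np.diag([1.0, -1.0]).astype(complex)
G = np.kron(Z, Z)

def R(theta):
    # Since G^2 = I, exp(-i*theta*G) = cos(theta) * I - i * sin(theta) * G.
    return np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * G

assert np.allclose(G @ G, np.eye(4))                     # G squares to identity
assert np.allclose(R(0.7).conj().T @ R(0.7), np.eye(4))  # R is unitary
assert np.allclose(R(0.3) @ R(1.1), R(0.3 + 1.1))        # composes additively
```

The same identity holds for any Pauli string, which is why the parametric entangling gates $R_{XX}$ and $R_{ZZ}$ mentioned above fall within the scope of the method.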
### Table R3: Energy and Fidelity for 600 observed points and different kernels, as shown in Fig. 2. Best results highlighted in bold. For energy lower is better. For fidelity, higher is better.
| Kernel | Energy | Fidelity |
|-|-|-|
| **vqe (ours)** | **-3.306890 ± 0.117974** | **0.926119 ± 0.047960** |
| rbf | -3.269373 ± 0.145235 | 0.895492 ± 0.084795 |
| periodic | -3.232246 ± 0.282348 | 0.901823 ± 0.095861 |
## **References**
- [1] A. Barenco A. et al., Phys. Rev. A 52, 3457 (1995).
- [2] A. Y. Kitaev, Russ. Math. Surv. 52 _1191_ (1997).
- [3] K. Nakanishi et al., Phys. Rev. Research 2, 043158 (2020).
- [4] J. C. Spall, IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
- [5] D. C. Liu, J. Nocedal, Mathematical Programming volume 45, pages 503–528 (1989).
- [6] J. A. Nelder and R. Mead, The Computer Journal 7, 308 (1965).
- [7] M. J. D. Powell, The Computer Journal, Volume 7, Issue 2, 1964, Pages 155–162 (1964).
- [8] R. Fletcher and C. M. Reeves, Computer Journal 7 (1964).
---
Rebuttal Comment 1.1:
Comment: Thank you, the rebuttal addressed my concerns and I increased the score. | Summary: In this work, the authors integrate a quantum kernel method with the EMICoRe architecture to further improve the NFT framework of Bayesian Optimization. The simulation results show the advantages of the proposed approach.
Strengths: (1) The method of leveraging the quantum method for Bayesian Optimization is interesting.
(2) The investigation of how to incorporate the quantum kernel method is significant.
Weaknesses: (1) Since no parametric circuits are implemented in the quantum kernel model shown in Eq. (9), the proposed VQE-kernel is nothing but a quantum kernel learning method, which has been comprehensively studied in previous work in Refs. [1-3]. In particular, Ref. [4] has exhibited the use of quantum kernel learning for improving the performance of Bayesian optimization. --- resolved
(2) The circuit diagram for the quantum kernel learning is not provided such that the experiments cannot be easily reproduced. --- resolved
(3) There are two main contributions to this work: the use of quantum kernel learning and an introduction to EMICoRe. Although the simulation results demonstrate the performance improvement, it is still unknown where the performance gains come from, and the quantum advantages of the quantum kernel are not analyzed at all. --- resolved
[1] Havlíček, Vojtěch, et al. "Supervised learning with quantum-enhanced feature spaces." Nature 567.7747 (2019): 209-212
[2] Wang, Xinbiao, et al. "Towards understanding the power of quantum kernels in the NISQ era." Quantum 5 (2021): 531
[3] Blank, Carsten, et al. "Quantum classifier with tailored quantum kernel." npj Quantum Information 6.1 (2020): 41
[4] Rath, Yannic, Aldo Glielmo, and George H. Booth. "A Bayesian inference framework for compression and prediction of quantum states." The Journal of Chemical Physics 153.12 (2020)
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (1) What are the quantum advantages of the quantum kernel method? Since the quantum kernel method is combined with another proposed method `EMICoRe', we do not know which part contributes to the improvement of the Bayesian Optimization method. --- resolved
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: It would be better to reformulate the quantum approach in the authors' proposed Bayesian Optimization framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. However, it seems as if the referee misunderstood the paper, as our work cannot be characterized as quantum kernel learning. In essence, a quantum kernel method leverages a quantum circuit to calculate a certain kernel, i.e., the calculation of the kernel is non-classical. In our work, we use classical kernels. Specifically, we propose a novel *classical* kernel which is specifically tailored to the VQE setup. More precisely, the corresponding feature vectors exactly correspond to the energy landscape modeled by the VQE. This is an important and novel contribution in its own right. We go beyond this insight and deploy this kernel in Bayesian optimization. Specifically, we propose EMICoRe and demonstrate that it leads to superior performance when compared to the current state-of-the-art method for optimizing VQE, i.e., NFT. In the following, we reply in more detail to the reviewer's comments:
## **Weaknesses**
> Since no parametric circuits are implemented in the quantum kernel model shown in Eq. (9), the proposed VQE-kernel is nothing but a quantum kernel learning method, which has been comprehensively studied in previous work in Refs. [1-3]. In particular, Ref. [4] has exhibited the use of quantum kernel learning for improving the performance of Bayesian optimization.
As discussed above, our work cannot be characterized as quantum kernel learning, which is the topic of the suggested references. We have added a discussion of the notable differences with respect to quantum kernel learning to the related work section in the updated version of the manuscript.
> The circuit diagram for the quantum kernel learning is not provided such that the experiments cannot be easily reproduced.
As we do not use a quantum kernel, we cannot provide a circuit diagram for it. In more detail, for quantum kernel learning, one would need two circuits, see Figure 1 of Ref. [2] that the referee has mentioned. In our case, we have a classical kernel. Therefore, we only require one circuit: the Efficient SU(2) circuit, as shown in Figure 6 in the Appendix of our paper for reproducibility. We also would like to emphasize that we provided the code for reproducing all our experiments, including the implementation of the quantum circuit.
> There are two main contributions to this work: the use of quantum kernel learning and an introduction to EMICoRe. Although the simulation results demonstrate the performance improvement, it is still unknown where the performance gains come from, and the quantum advantages of the quantum kernel are not analyzed at all.
As discussed above, our work does not study quantum kernels, thus, no possible quantum advantage from a quantum kernel can be analyzed. Regarding the quantum advantage of VQE and the performance gain, please refer to the following answer.
## **Questions**
> What are the quantum advantages of the quantum kernel method? Since the quantum kernel method is combined with another proposed method `EMICoRe`, we do not know which part contributes to the improvement of the Bayesian Optimization method.
As we pointed out in the **Weakness** section, our work does not consider quantum kernels, so no quantum advantage of a quantum kernel can be studied. Instead, our work proposes a *classical* kernel that is uniquely suited for the VQE setup. Specifically, the corresponding feature vectors precisely align with the energy landscape modeled by the VQE. In particular, it retains the property of VQEs that the energy can be determined along an entire line by only fixing three points. This powerful inductive bias allows us to propose EMICoRe, which can outperform the current state-of-the-art method [1]. For the VQE algorithm, it is very well known that the quantum advantage comes from the efficient implementation of the exponentially large Hilbert space using the qubits of a quantum computer. A quantum computer only needs $Q$ qubits to prepare the quantum state $|\psi(\theta)\rangle$ in the VQE algorithm, while a classical computer would require exponentially large resources, $2^Q$, for the same operation.
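The "three points fix the energy along an entire line" property can be verified with a short numerical illustration (the coefficients below are randomly drawn, not measured from a real circuit): along any single parameter, the VQE energy takes the form $E(\theta) = a\cos\theta + b\sin\theta + c$, so three evaluations suffice to reconstruct it everywhere.

```python
import numpy as np

# Random sinusoid standing in for the energy along one parameter direction.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3)
E = lambda th: a * np.cos(th) + b * np.sin(th) + c

# Three probe angles give a 3x3 linear system for (a, b, c).
thetas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
A = np.column_stack([np.cos(thetas), np.sin(thetas), np.ones(3)])
a_hat, b_hat, c_hat = np.linalg.solve(A, E(thetas))

# The reconstructed sinusoid matches the true energy along the whole line.
grid = np.linspace(-np.pi, np.pi, 9)
assert np.allclose(a_hat * np.cos(grid) + b_hat * np.sin(grid) + c_hat, E(grid))
```

This is the inductive bias encoded in the VQE kernel: the GP posterior becomes exact along a coordinate line after only three (noiseless) observations.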
## **References**
- [1] [Nakanishi K. et al., Phys. Rev. Research 2, 043158 (2020).](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.043158)
- [2] [Wang, Xinbiao, et al. "Towards understanding the power of quantum kernels in the NISQ era." Quantum 5 (2021): 531](https://quantum-journal.org/papers/q-2021-08-30-531/)
---
Rebuttal Comment 1.1:
Title: Follow-up for the rebuttal letter
Comment: Thank the authors for providing the rebuttal letter, which helps the reviewer to better understand the paper. Since the authors have resolved all of my major concerns, I have increased my suggested score for the paper.
---
Reply to Comment 1.1.1:
Title: Strengths after misunderstanding has been resolved?
Comment: Dear Reviewer 1i6G,
Thank you very much for responding to our rebuttal and updating your review. We are happy to know from your updated review that all the weaknesses and the questions in the original review have been resolved. We also appreciate that the underlying misunderstanding, i.e., misclassification of our work as quantum kernel learning, has also been resolved.
However, we wonder why you recommend borderline accept, which only means that "reasons to accept outweigh reasons to reject", although no weakness remains unresolved. Do you mean that the strengths of our paper are insignificant? This point is unclear to us because the strengths in your review are unchanged from the original review, which was based on a fundamental misunderstanding of our paper and therefore did not evaluate our contributions (our paper is NOT about (1) a quantum method for Bayesian optimization NOR about (2) a quantum kernel method, BUT about Bayesian optimization for hybrid quantum-classical computing with a novel classical kernel and a novel acquisition function).
We would appreciate it if you would update the strengths, and clarify why the strengths are not sufficient for acceptance even after all weaknesses have been resolved.
Sincerely,
Authors | Summary: The authors propose a method for Bayesian Optimization for Variational Quantum Eigensolvers, which they call NFT with EMICoRe. This method uses a novel VQE kernel, which constrains the function space of the Gaussian Process underlying the BO to include only valid VQE objective functions (using the representation in Prop. 2, derived from NFT). The authors also propose a novel acquisition function for the EMICoRe method (Eq. 11), which optimizes over the expected maximum improvement over confident regions. In their experimentation, the authors show that their VQE kernel is able to outperform other kernels in a BO setting, and that their NFT-EMICoRe approach is able to outperform other (non-BO) NFT approaches.
Strengths: The paper tackles an important problem, the optimization of noisy VQE circuits, and offers a principled solution using BO combined with physical constraints, using state of the art methods (NFT). The paper is well written, and the experimentation is well chosen to support the method.
Weaknesses: The experimentation could be expanded. Particularly, it would be interesting to see how the model performs on an actual quantum implementation. Also, investigation of a broader range of Hamiltonians would be desirable (including ones motivated by practical problems).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The threshold parameter \kappa is introduced in Sec. 3.2, but doesn't appear to be explored in the experimentation. In particular, did the authors verify that an intermediate value of this parameter is advantageous (hence supporting the use of the EMICoRe acquisition function)? I may have missed this in the experimentation.
Further, on lines 133-134, they state that they are not concerned with circuit noise on current NISQ devices, but line 320 states that the experimentation confirms the suitability of their method for such devices - is such noise explicitly included in the simulations?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the referee for their insightful comments and positive feedback on our work. We take this opportunity to address and respond to the comments below:
## **Weaknesses**
> The experimentation could be expanded. Particularly, it would be interesting to see how the model performs on an actual quantum implementation. Also, investigation of a broader range of Hamiltonians would be desirable (including ones motivated by practical problems).
We agree that our method is suited for a broader range of applications as well as Hamiltonians. The Ising and Heisenberg Hamiltonians, however, represent standard benchmarks that are of high practical relevance. They correspond to spin chain Hamiltonians that are widely studied in condensed matter physics, see e.g. Refs. [1,2]. Furthermore, many lattice field theories can be represented as generalized spin chains (see, e.g., Eq. (4)-(6) in [3]). As far as the implementation on actual quantum devices is concerned, the standard benchmarking is usually performed on simulated, noiseless devices which only account for shot noise. We plan to investigate hardware noise in future work. For more details please see the fourth bullet point in the general rebuttal.
## **Questions**
> The threshold parameter $\kappa$ is introduced in Sec. 3.2, but doesn't appear to be explored in the experimentation. In particular, did the authors verify that an intermediate value of this parameter is advantageous (hence supporting the use of the EMICoRe acquisition function)? I may have missed this in the experimentation.
Thank you for raising this crucial question. We performed an additional investigation of different heuristics for setting the parameter $\kappa$. This study can be found in the general rebuttal (see Table R1) and will be added to the appendix of the paper in its updated version.
> Further, on lines 133-134, they state that they are not concerned with circuit noise on current NISQ devices, but line 320 states that the experimentation confirms the suitability of their method for such devices - is such noise explicitly included in the simulations?
In our experiments, we only consider shot noise throughout the paper. We apologize for any confusion our phrasing may have caused. We will revise the main text to make this clearer.
## **References**
- [1] [Funcke L. et al., arXiv preprint arXiv:2302.00467 (2023)](https://arxiv.org/pdf/2302.00467.pdf)
- [2] [Di Meglio A. et al, arXiv preprint arXiv:2307.03236 (2023).](https://arxiv.org/pdf/2307.03236.pdf)
- [3] [Atas Y. et al, Nature communications 12, no. 1:6499 (2021).](https://arxiv.org/pdf/2102.08920.pdf)
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications, and I agree that the new results on tuning $\kappa$ are a valuable addition to the paper. | Summary: I have reviewed the rebuttal, and I intend to maintain my decision. I appreciate that the authors made the effort to clarify most of my concerns. However, I believe it is essential to address the issue of hardware noise in the present study, a point that previous research seems to have overlooked. I strongly encourage the authors to delve deeper into this matter, as a robust algorithm should be capable of practical implementation on real quantum computers. I believe the issue of hardware noise is an inescapable and important challenge here.
This manuscript presents an innovative approach known as EMICoRe, which leverages the synergy between Bayesian optimization and prior knowledge of variational quantum eigensolvers (VQE) to enhance the efficiency of the optimization. The authors introduce a novel kernel specifically designed for Bayesian optimization and employ it within the EMICoRe framework. Additionally, the paper conducts numerical experiments to compare the proposed method with the state-of-the-art baselines, demonstrating its superior performance.
Strengths: - The manuscript introduces a novel optimization algorithm based on Bayesian optimization that incorporates the specific property of the VQE objective function into its design.
- The paper is well-organized and clearly written. The figures in the paper effectively illustrate the improved performance achieved by the proposed method in comparison to the baselines.
- The paper establishes the equivalence between two previously proposed VQE properties, providing a foundational basis for the subsequent analysis on VQE objective functions.
Weaknesses: - The numerical results presented in the paper indicate that the proposed method does not exhibit a clear advantage when the number of observed points is limited, as compared to the baselines.
- Although the proposed method exhibits improved performance compared to the baseline as the number of observed points increases, it still falls short of achieving the ideal ground state.
- The proposed method just considers the shot noise but ignores possible noise originating from the quantum circuit itself.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Based on the numerical results presented in the paper, the fidelity between the ground state and the state obtained through the VQE remains consistently below 0.8, even after reaching 400 observed points. It is noteworthy that the VQE circuit being optimized in the study comprises only three layers and five qubits. Therefore, I believe it is crucial to investigate the potential impact of further increasing the number of observed points and determine the minimum number of observed points required to achieve a higher fidelity, such as surpassing 0.95.
2. The manuscript just includes two parameter settings, one for the Ising model and the other for the Heisenberg model. However, to verify the robustness of the proposed method, I suggest that the authors incorporate more scenarios for validation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I believe the authors have provided thorough discussions on the limitations of their work within the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and appreciation of our manuscript.
## **Weaknesses**
> The numerical results presented in the paper indicate that the proposed method does not exhibit a clear advantage when the number of observed points is limited, as compared to the baselines.
Indeed, when the number of observed points is small, the GP surrogate model is not informative enough to provide an effective acquisition function. When the number of observed points gets larger, and thus the GP surrogate model becomes more informative, our EMICoRe approach starts outperforming the standard baselines; i.e., our informed search for new points to observe, based on the CoRe, shows its benefit.
> Although the proposed method exhibits improved performance compared to the baseline as the number of observed points increases, it still falls short of achieving the ideal ground state.
We would like to point out that the primary objective of our work is to demonstrate that EMICoRe can improve the efficiency of the training process, and not to extract the physics of the Ising model, which is well known in the literature. Thus, we compare the performance of the training process for different methods under a fixed budget of observations. Our method reaches the ground state by simply performing longer optimizations. To demonstrate this, we ran additional experiments with a larger number of observations, see Fig. R2 in the PDF. From this plot, it is evident that taking more observations will make our EMICoRe converge to the ground state as expected.
> The proposed method just considers the shot noise but ignores possible noise originating from the quantum circuit itself.
When benchmarking the performance of newly proposed hybrid quantum-classical algorithms, the noise from quantum hardware is usually not included in first-case studies (see e.g., [1,2,3]). A standard procedure in the field is to benchmark first on shot noise only, see the NFT [1] experiments, and consider hardware noise in potential follow-up studies. As stated in the general rebuttal, the main reason is that quantum noise is strongly hardware dependent, e.g., superconducting quantum hardware is affected by fundamentally different types of noise (in particular, CNOT gate noise, decoherence, and measurement noise) compared to, e.g., trapped-ion quantum hardware. Moreover, various error mitigation schemes exist for different types of quantum hardware, which generally require additional error calibration runs on the hardware. An informative study of quantum hardware noise would need to take all these considerations into account and therefore is beyond the scope of the initial proposal of our novel method.
## **Questions**
> Based on the numerical results presented in the paper, the fidelity between the ground state and the state obtained through the VQE remains consistently below 0.8, even after reaching 400 observed points. It is noteworthy that the VQE circuit being optimized in the study comprises only three layers and five qubits. Therefore, I believe it is crucial to investigate the potential impact of further increasing the number of observed points and determine the minimum number of observed points required to achieve a higher fidelity, such as surpassing 0.95.
We agree with the referee, and we have performed longer experiments, see Fig. R2 in the attached PDF. In this analysis, we show that running the optimization longer, i.e., performing more observations, lets the energy converge to the ground state and the fidelity approach $0.976$. Looking at the figure, we also note that EMICoRe requires 1000 observations to push the average fidelity above $0.95$, while NFT never reaches that value. In Table R2 below, we report the mean and standard deviation of the energy and fidelity for EMICoRe and the NFT baselines when running for 6000 observations, associated with the experiments mentioned above.
> The manuscript just includes two parameter settings, one for the Ising model and the other for the Heisenberg model. However, to verify the robustness of the proposed method, I suggest that the authors incorporate more scenarios for validation.
Our experiments focus on the Heisenberg model with the Ising model as a special subclass of the Heisenberg one. In both cases, the parameters are chosen such that in the thermodynamic limit, they correspond to critical points, i.e., where a quantum phase transition occurs. This choice of setup is in general among the most challenging scenarios, as the corresponding ground states show the largest amount of entanglement. We thus expect any other choice of parameters to render the problem easier. To complement our results, we added another choice of couplings in the PDF document, see Fig. R1. We plan to further extend our study to other Hamiltonians, e.g., quantum chemistry systems, in future work.
### Table R2: Energy and fidelity for longer runs (6000 observations) for the Ising model at criticality. Best results highlighted in bold. For energy, lower is better. For the fidelity, higher is better.
| kernel | Energy | Fidelity |
|--------------------|----------|----------|
| **EMICoRe (ours)** | **-5.973762 ± 0.045612** | **0.975552 ± 0.039288** |
| NFT-random | -5.929573 ± 0.075909 | 0.918256 ± 0.097190 |
| NFT-sequential | -5.930662 ± 0.097275 | 0.916873 ± 0.155590 |
## **References**
- [1] [Nakanishi K. et al., Phys. Rev. Research 2, 043158 (2020).](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.043158)
- [2] [Farhi E. et al., arXiv:1411.4028 (2014)](https://arxiv.org/abs/1411.4028)
- [3] [Bravo-Prieto C. et al., Quantum 4, 272 (2020).](https://quantum-journal.org/papers/q-2020-05-28-272/) | Rebuttal 1:
Rebuttal: We thank the four reviewers for their valuable feedback. To streamline our reply, we place the referenced tables at the bottom.
- Some reviewers suggested additional experiments for other parameter choices of the target Hamiltonians. We stress that the Ising Hamiltonian, which is studied along with the Heisenberg Hamiltonian, is considered at criticality (in our paper) and thus represents a challenging optimization objective. Studying the Hamiltonians for coupling values off criticality can be expected to be less challenging. To confirm this, we have performed additional experiments for different coupling parameters. These are shown in Fig. R1 in the attached PDF.
- Furthermore, some reviewers suggested to explore further possibilities of heuristics to set the CoRe threshold $\kappa$. As a result, we investigated this as part of additional experiments. Specifically, we used the following updating rule for tuning the value of $\kappa$ at each step:
$$
\kappa = \max\left(C_0\cdot\sigma, C_1\cdot\frac{ \hat{\mu}^{t - T_{\mathrm{Ave}}} - \hat{\mu}^{t}}{T_{\mathrm{Ave}}}\right)
$$
with hyperparameters $C_0$ and $C_1$. We note that $\sigma$ is the standard deviation of the observation noise, and fixing $C_0=0, C_1=1.0$ recovers the heuristic used in the original submission.
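As a sketch, the updating rule can be written as follows (the variable names are ours and hypothetical, not from the actual implementation; `mu_hist[t]` stands for $\hat{\mu}^{t}$):

```python
def core_threshold(mu_hist, sigma, t, T_ave, C0=0.1, C1=10.0):
    """CoRe threshold kappa: the maximum of a noise floor (C0 * sigma)
    and the average energy improvement per step over the last T_ave steps,
    scaled by C1 (see the update rule above)."""
    improvement = (mu_hist[t - T_ave] - mu_hist[t]) / T_ave
    return max(C0 * sigma, C1 * improvement)
```

With `C0=0` and `C1=1.0` this reduces to the heuristic of the original submission; the defaults shown here correspond to the best combination found in Table R1.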
We report the results for different choices of the hyperparameters $C_0$ and $C_1$ in Table R1 below. The best combination is highlighted in bold. The experiments were performed for the Ising model at criticality using EMICoRe and a $(L = 3)$-layered $(Q = 5)$-qubits quantum circuit with $N_\textrm{{shots}}=1024$. Each trial ran for 600 observations. Table R1 demonstrates that intermediate values for the hyperparameters $C_0, C_1$ achieve the best result, thus proving the effectiveness of EMICoRe. Also, we found that the combination $C_0=0.1, C_1=10$ improves the result from the original submission. We thank the reviewers for the suggestion, and we will update the results in the paper with the new heuristics.
- Some reviewers also asked why the ground state is never reached by our method or any baseline. This is merely a consequence of taking a limited number of observations, and we performed longer optimizations to confirm this. Specifically, Fig. R2 in the attached PDF demonstrates that running the experiments longer, i.e., observing more points, makes the optimization converge closer to the ground state and raises the fidelity to $0.976$; see Table R2 in the reply to referee 93Tb for details. We explicitly tested that the same holds true for every other choice of parameters and circuit setups investigated in this manuscript, e.g., $(Q,L)=(3,3), (5,3), (7,5)$. For each of those setups, we achieved similarly high fidelities. We will add these experiments to the revised manuscript.
- Another question raised by some referees relates to the type of noise investigated in our work. When benchmarking the performance of newly proposed hybrid quantum-classical algorithms, the noise from quantum hardware is usually not included in first-case studies. A standard procedure in the field is to benchmark first on shot noise only and consider hardware noise in potential follow-up studies, see, for example, the NFT paper [1] and the other Refs. [2,3] below. The main reason is that quantum noise is strongly hardware dependent, e.g., superconducting quantum hardware is affected by fundamentally different types of noise (in particular CNOT gate noise, decoherence, and measurement noise) compared to, e.g., trapped-ion quantum hardware. Moreover, various error mitigation schemes exist for different types of quantum hardware, which generally require additional error calibration runs on the hardware. An informative study of quantum hardware noise would need to take all these considerations into account and therefore is beyond the scope of the initial proposal of our novel method.
Due to some potential misunderstandings by a reviewer, we would like to kindly reiterate the main contributions of our work as follows:
- We propose a novel classical kernel, the VQE kernel, which is uniquely suited for the VQE setup. Specifically, its corresponding feature vectors are the basis of the energy functions that can be modeled by the parameterized quantum circuit.
- Leveraging this powerful inductive bias allows us to propose a novel Bayesian optimization method, EMICoRe, which harnesses the fact that for the parameterized quantum circuit, only three observations allow us to determine the energy landscape along an entire line.
- We demonstrate in our numerical experiments that our proposed method can outperform the current state-of-the-art method NFT and standard BO schemes.
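The property in the second bullet, that three observations determine the energy landscape along an entire coordinate line, follows from the energy being a sinusoid $E(\theta) = a + b\cos\theta + c\sin\theta$ in each circuit parameter. A minimal sketch with synthetic coefficients (illustrative, not the paper's implementation):

```python
import numpy as np

def fit_sinusoid(thetas, energies):
    """Recover (a, b, c) of E(theta) = a + b*cos(theta) + c*sin(theta)
    from exactly three noiseless observations."""
    A = np.stack([np.ones(3), np.cos(thetas), np.sin(thetas)], axis=1)
    return np.linalg.solve(A, energies)

# Hypothetical 1-D energy landscape along one circuit parameter
a, b, c = -4.0, 1.5, -0.7
E = lambda t: a + b * np.cos(t) + c * np.sin(t)

thetas = np.array([0.0, np.pi / 2, np.pi])      # NFT-style probe angles
coeffs = fit_sinusoid(thetas, E(thetas))
theta_min = np.arctan2(-coeffs[2], -coeffs[1])  # argmin of the sinusoid
```

Three function values thus pin down the whole line, including its exact minimizer, which is what makes line-based updates and line-aware acquisition functions so sample-efficient in this setting.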
----------
### Table R1: Results for a new heuristic of setting $\kappa$ using the hyperparameters $C_0$ and $C_1$. Best results highlighted in bold. For energy, lower is better. For fidelity, higher is better.
| Description | $C_0$ | $C_1$ | Energy | Fidelity |
|-----------------|-------------------|---------------------|--------|----------|
| In Manuscript | 0.0 | 1.0 | -5.817205 ± 0.140868 | 0.857839 ± 0.159668 |
| Extreme (small) | 0.1 | 0.1 | -5.816213 ± 0.113959 | 0.857070 ± 0.163497 |
| High (large) | 10.0 | 10.0 | -5.719012 ± 0.152881 | 0.824516 ± 0.156895 |
| Extreme (large) | 10.0 | 100.0 | -5.703273 ± 0.156716 | 0.804686 ± 0.179101 |
| **Best** | **0.1** | **10.0** | **-5.842853 ± 0.089015** | **0.869609 ± 0.112713**|
## References
- [1] [Nakanishi K. et al., Phys. Rev. Research 2, 043158 (2020).](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.043158)
- [2] [E. Farhi et al., arXiv:1411.4028 (2014)](https://arxiv.org/abs/1411.4028)
- [3] [Carlos Bravo-Prieto et al., Quantum 4, 272 (2020).](https://quantum-journal.org/papers/q-2020-05-28-272/)
Pdf: /pdf/78b7b507db1c1e969b2e12b159ab37c39b821a6f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Explanation Shift: How Did Distribution Shift Impact the Model? | Reject | Summary: This paper introduces a new concept called "explanation shift" for detecting shifts in data distributions with the changes of the attribution distributions on machine learning models. The authors argue that current methods for detecting shifts have limitations in identifying changes in model behavior. Explanation shift provides more sensitive and explainable indicators for these changes. The paper also compares the proposed method against other methods for detecting distribution shifts in both synthetic and real datasets.
Strengths: + Leveraging the changes of explanations as a manner of detecting the distribution shift is a novel idea.
+ The authors provide a comprehensive analysis to show the connections between explanation shift and various distribution shifts, which could be helpful for readers to understand how to use explanation shift to detect distribution shift.
Weaknesses: + The overall presentation is not clear, and many key terminologies and notations are not well explained or defined. For example, in Equation (3), what is $x^*$? What is the formal definition of $S(f_{\theta},x)$? What are the definitions of "sensitivity" and "accountability", which are used as evaluation metrics in the experiments? The lack of clear presentation of these terms makes it extremely hard for me to understand the key information in this paper.
+ Although the authors proposed a new concept called "explanation shift", the technical contribution is still very limited. First of all, the method proposed for detecting the explanation shift (i.e., Section 3) is very simple. But the authors failed to justify why this is an effective method from the theoretical perspective by comparing it against other methods.
+ Some key empirical studies are missing. In Section 5.3, the authors evaluate their methods on some real datasets to detect novel group distribution shift and geopolitical and temporal shift. However, the authors did not perform the same experiments by using baseline methods. Thus it is unclear whether those baseline methods can discover the same types of shifts or not. If yes, then what are the benefits of the proposed method? If not, why are those baseline methods unable to find out those shifts?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: + Some notations and terminologies should be clearly defined. For example, in Equation (3), what is $x^*$? What is the formal definition of $S(f_{\theta},x)$? What are the definitions of "sensitivity" and "accountability" which are used as evaluation metrics in Experiments?
+ In terms of the experiments on the real datasets, can the baseline methods discover the same types of shifts?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Some key empirical studies are missing. In Section 5.3, the authors evaluate their methods on some real datasets to detect novel group distribution shift and geopolitical and temporal shift. However, the authors did not perform the same experiments by using baseline methods. Thus it is unclear whether those baseline methods can discover the same types of shifts or not. If yes, then what are the benefits of the proposed method? If not, why are those baseline methods unable to find out those shifts?
Our main contribution is the use and analysis of explanation distributions. The classifier with the two-sample test is not novel per se; it represents the application of a two-sample test to the distribution of explanations.
As a baseline comparison method, we use C2ST on the input data distributions as it has been used in Lopez et al. and C2ST on the predictions.
The results of the baseline comparisons are: (1) compared to input data, explanations incorporate information about the model, and (2) compared to predictions, the distribution of explanations turns out to be more sensitive, as it carries the same information in a higher number of dimensions; thus, more precise shifts in the data can be detected earlier.
Both baseline comparisons are supported through mathematical analysis (Section 4), analytical examples (Appendix B), synthetic data (Appendix C), real data (Section 5 and Appendix), varying models and hyperparameters (Appendix E), and varying explanation methods (Appendix F).
The question of the paper is not just how baseline methods can detect types of shift, but how those shifts relate to the model. The analytical examples in the Appendix provide the mathematical derivations, and Appendix C provides experiments on synthetic data evaluating against measures on other distributions.
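This point can be illustrated with a toy linear model, for which SHAP values under feature independence are analytic, $S_j(f,x) = w_j(x_j - \mathbb{E}[x_j])$ (the toy shift below is our illustration, not one of the paper's experiments): shifting a feature the model ignores changes the input distribution but leaves the explanation distribution untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_shap(w, X, mu):
    """Exact SHAP values of a linear model f(x) = w @ x with
    independent features: S_j(x) = w_j * (x_j - mu_j)."""
    return w * (X - mu)

w = np.array([1.0, 0.0])            # the model ignores feature 2
X_tr = rng.normal(0.0, 1.0, (1000, 2))
X_ood = X_tr.copy()
X_ood[:, 1] += 5.0                  # shift only the ignored feature

mu = X_tr.mean(axis=0)
S_tr, S_ood = linear_shap(w, X_tr, mu), linear_shap(w, X_ood, mu)
# Input distributions differ, but the explanation distributions are
# identical: a detector on explanations reports no model-relevant shift,
# whereas a detector on the input data would flag this harmless shift.
```

In the paper's actual setting, a classifier two-sample test on such explanation matrices plays the role of the detector.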
> Although the authors proposed a new concept called "explanation shift", the technical contribution is still very limited. First of all, the method proposed for detecting the explanation shift (i.e., Section 3) is very simple. But the authors failed to justify why this is an effective method from the theoretical perspective by comparing it against other methods.
Complementing the answer above: the contribution of the work is the conceptualization of a shift in the distribution of explanations to measure changes in the model. We have compared the same simple methodology to other distributions: input data and predictions. The comparisons are done from a mathematical analysis perspective, deriving from cases where Shapley values can be analytically calculated (Appendix B), experiments on synthetic data (Appendix C), and different analyses on real data (L229-235).
> Some notations and terminologies should be clearly defined. For example, in Equation (3), what is x*?
Thank you for flagging the terminology inconsistency. We will unify the notation. $x^*$ is the instance to be predicted; it may be clearer without the $*$.
> What is the formal definition of S(f,x)?
$S_j(f;x)$ is the short notation for $S_j(val_{f,x})$. "$S_j(f;x)$ stands for the SHAP value of the j'th feature of input x based on model f. We will make our formal description more precise."
There is also a formal definition (Definition 3.1, L52). Does this clarify the issue?
>What are the definitions of "sensitivity" and "accountability" which are used as evaluation metrics in Experiments?
Accountability is defined alongside Table 1: “We evaluate accountability by checking if the feature attributions of the detection method correspond with the synthetic shift generated in both scenarios”.
Sensitivity refers to how readily the method reacts when measuring the interaction between a distribution shift and the model. We will clarify this in the manuscript. Thanks for pointing it out.
> In terms of the experiments on the real datasets, can the baseline methods discover the same types of shifts?
In L79 we add a disclaimer, “In practice, multiple types of shifts co-occur together, and their disentangling may constitute a significant challenge that we do not address here”.
---
Rebuttal Comment 1.1:
Comment: Thanks very much for the authors' efforts in providing such detailed answers. I have read the other reviewers' comments. I do feel that the authors need to compare the proposed solution against baseline methods in the literature, such as NDCG, as mentioned by reviewer jhhk. So I would maintain my score unless there are more experimental results from the authors.
---
Reply to Comment 1.1.1:
Comment: Many thanks for the review. We have added a comparison against NDCG in the general comments.
Strengths: To the best of my knowledge, this is a novel approach that uses explanation to detection distribution shift. The proposed method is clear and the method seems to be effective in practice.
Weaknesses: Section 4.1 provides examples where the proposed method works but simple distribution shift evaluation fails. But this does not provide any guarantee of whether, in general, the proposed model is better or not. The same is true for Section 4.3. Section 4.2 provides a proposition, but as mentioned by the authors, prediction shift implies explanation shift, while the opposite is not true. Thus, no conclusion can be made when there is an explanation shift.
Even though the authors compared their proposed model with the baselines on the synthetic data set in Section 5.1, they have not done it using any real data sets. The real data set is mainly used to study the sensitivity of the model on the parameters.
There is lack of consistency in notation used in the paper that makes it more difficult to follow. Notation changes from one section to another, and in some extreme cases from one example to another. Here are some instances:
1- Val function is defined differently in Equation 1 and 2.
2- Equation 3 is not clear and not explained either. What is the expected value is defined on? If it is X, why the notation differs from Equation 2?
3- There is a sign used in Example 4.2 which is not defined.
The paper benefits from a round of proof-reading.
Line 143: out approach –> our approach
Line 182: a hard tasks --> a hard task (the sentence that includes this is also not clear and needs explanation)
Line 279: AppendixE.1 --> Appendix E.1
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Check the Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
> Section 4.1 provides examples where the proposed method works but simple distribution shift evaluation fails. But this does not provide any guarantee whether in general the proposed model is better or not. The same is true for Section 4.3. Section 4.2. provides a disposition but as mentioned by the authors, the prediction shift implies explanation shift, but the opposite is not true. Thus, no conclusion can be ae when there is an explanation shift.
We want to measure distribution shifts and their interaction with the model. Section 4.1 implies that prediction shift is not a reliable measure.
In Section 4.2 we state that shifts in the input data do not necessarily relate to changes in the model, and in Section 4.3 that changes in the predictions do not necessarily measure distribution shifts that impact the model.
The mathematical examples aim to showcase situations where explanation shift achieves the desired result against input data shift or prediction shift. It aims to be a conceptual analysis that supports the later experimental one.
The experimental section aims to showcase in which situations explanation shift is better; for this, we have shown analytical, synthetic, and real-data experiments. For example, we have shown how varying the hyperparameters of the model (Appendix E) affects explanation shift while distribution shift is unaffected. We also perform experiments varying the estimator $f$; see L229-235.
In summary, the difference from input data shift is that explanation shift relies on the model; the difference from prediction shift is the higher number of dimensions containing the same information.
Prediction shift implies explanation shift, but the other direction does not hold; this helps us provide theoretical evidence that many of the current methods relying on model predictions are actually not reliable for measuring distribution shifts that impact the model.
> There is lack of consistency in notation used in the paper that makes it more difficult to follow. Notation changes from one section to another, and in some extreme cases from one example to another. Here are some instances:
>Val function is defined differently in Equation 1 and 2.
Eq. 1 shows the Shapley values for an arbitrary value function val, while eq. 2 defines the specific value function of SHAP-values.
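For concreteness, the Shapley value of Eq. 1 can be computed exactly for small player sets by enumerating coalitions; this generic sketch is illustrative (it is not the SHAP estimator used in the paper) and works for any value function:

```python
import itertools
import math

def shapley_values(n, val):
    """Exact Shapley values by enumerating all coalitions S not containing j:
    phi_j = sum_S |S|! (n-|S|-1)! / n! * (val(S u {j}) - val(S))."""
    phi = [0.0] * n
    for j in range(n):
        others = [i for i in range(n) if i != j]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w_S = (math.factorial(r) * math.factorial(n - r - 1)
                       / math.factorial(n))
                phi[j] += w_S * (val(frozenset(S) | {j}) - val(frozenset(S)))
    return phi
```

For an additive value function the Shapley values recover each player's own contribution, and the efficiency property (the attributions summing to `val(all) - val(empty)`) holds by construction; plugging in the SHAP value function of Eq. 2 yields the SHAP values of Eq. 3.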
> Equation 3 is not clear and not explained either. What is the expected value is defined on? If it is X, why the notation differs from Equation 2?
Here, $S_j(f;x)$ is the short notation for $S_j(val_{f,x})$, often seen in the literature. We will add this to Eq. 3.
> There is a sign used in Example 4.2, which is not defined.
We are not sure, which sign is unknown to the reviewer. We hope it is one of the following: ~ (is sampled from); ⊥ (is stochastically independent of); × (Cartesian Product) | Summary: Detecting shifts in data distribution between training and deployment is critical for ensuring models function as intended and operate in their domain of applicability. However, detecting such shifts is challenging. In this paper, the authors propose an approach based on techniques from the explainability literature. They define the concept of explanation shift and introduce an Explanation Shift Detector. They validate their approach on a synthetic data and 4 tabular datasets, demonstrating improved performance over a range of baselines.
Strengths: The paper is well written, the introduction well motivated, and the formalism both precise and easy for the reader to follow. I found the method interesting, and the analysis of explanation shift detailed and informative. I think this work is a meaningful contribution to the literature.
Weaknesses: The experiments were only conducted on several, relatively simple, tabular datasets. Demonstrating the method for another modality would strengthen the paper.
Please see Questions below.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: L115-119 - Related work in explainability. Lundberg et al. is not the only work relating explainability and distributional shift. For example, Crabbe et al (2020) use example-based explanations to detect out-of-distribution samples, while Hinder et al. (2022) use contrasting explanations to explain concept drift.
Crabbé, Jonathan, et al. "Explaining latent representations with a corpus of examples." Advances in Neural Information Processing Systems 34 (2021): 12154-12166.
Hinder, Fabian, et al. "Contrasting explanation of concept drift." 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN. 2022.
Figure 1 – I think there is a typo in “Explain Explanation Shift Detector” (unclosed bracket).
L238-241 – It would benefit the reader to provide some additional explanation of some of the baseline methods, particularly those which don’t seem to be discussed elsewhere.
L258 – how does Table 1 “show the results of [your] approach”? More generally Table 1 seems like it would fit more naturally with the discussion of related work.
L263,L265 – I think left and right in Figure 2 have been switched.
Figure 2, right – It is stated in the caption that “good indicators should follow a progressive stead positive slope…”, however this is not discussed in the text. I think the discussion would benefit from this, since it might not be immediately obvious to the reader.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for helpful comments.
>L115-119 - Related work in explainability. Lundberg et al. is not the only work relating explainability and distributional shift. For example, Crabbe et al (2020) use example-based explanations to detect out-of-distribution samples, while Hinder et al. (2022) use contrasting explanations to explain concept drift.
Many thanks for the related work that can help us better position the work. We will definitely add them in Appendix A, where we compare them to more existing works and consider adding them to the main body.
> Demonstrating the method for another modality would strengthen the paper.
Applying explanation distributions to other data modalities is not straightforward due to the dimensionality of the distributions (e.g., NLP). We have stated this limitation of the scope of the paper.
Extending to other data modalities remains a further avenue of research where we will need novel methods to measure explanation shift.
–
Concerning the rest of the comments, we fully agree with the reviewer and will address them adequately.
---
Rebuttal Comment 1.1:
Comment: I have read all reviews and responses from the Authors and I thank the Authors for their responses.
I would like to retain my original (positive) evaluation. | Summary: The submission proposes an approach to improve model monitoring by rather evaluating changes in explanations instead of input features. The authors provide synthetic examples to justify their method and compare it empirically to existing strategies on tabular datasets.
Strengths: - The paper addresses an important topic as effective model monitoring based on unlabeled data only is a relevant problem.
- Although rather simplistic, the synthetic examples help to get a rough idea about the potential benefits of explanation monitoring.
- The authors provide code as well as tutorials on how to apply their method to ensure reproducibility.
Weaknesses: - The theoretical analysis is extremely limited such that the overall assumptions under which the proposed method can be expected to yield actual benefits are too vague. Also, the basic notations section seems a bit inflated.
- I think the novelty is limited as well. Monitoring feature attributions instead of input data is not new and is already offered by popular ML service providers. See, for instance, the functionality implemented by Google (https://cloud.google.com/vertex-ai/docs/model-monitoring/monitor-explainable-ai). I would have also liked to see such an alternative approach of using explanations for monitoring included somewhere in the experiments.
- The conducted numerical experiments are not sufficient to demonstrate the benefits of the proposed approach. If only considering tabular data I think including more than 3 actual datasets and 4 prediction tasks is necessary to be convincing. This is especially true for methods where rigorous theoretical analysis is challenging. See also the question below for further suggestions.
- The evaluation section is hard to follow, and lacks formulation of insights derived from the experimental results, e.g., it is unclear what benefits can be derived from the feature importance in Figure 4. Given the lack of baselines and justification for those explanations, it is also not clear if they represent useful insights into the effect of a distribution shift on the model’s behavior. Also, Table 1 comes out of nowhere and is not described sufficiently.
- Given the limited theoretical and empirical investigation the submission does in my opinion not make a significant contribution to the field.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Why are only observational Shapley values considered? Have experiments also been conducted based on interventional Shapley values? The interventional approach is more applicable in general and has been proven to result in Shapley values that are closer to explaining the true model behavior [1,2]. I would imagine that this should also have an impact on how distribution changes are reflected in the explanations.
Has it also been tried to apply explanation techniques like SHAP and LIME to the two-sample test classifier instead of the presented approach in Figure 4 (right)? This would be much more general.
Have all of the mentioned baseline methods also been applied to the considered real datasets? This is needed for a comprehensive evaluation.
Why are the investigations only limited to tabular data? Explanations are extremely popular for vision models and even Shapley values can also be approximated quite efficiently when basic feed-forward neural networks are used. Adebayo et al. [3] evaluate a variety of different explanation methods on vision models for in-domain and out-of-domain instances. Their setup is a little different but maybe the authors can get some inspiration regarding similar experiments for explanation shifts on image data.
Explanations are unstable and might change significantly already for minor input perturbations [4] or other desirable sanity checks that are not satisfied [5]. I wonder whether these phenomena also impact the capabilities of explanation monitoring and would have liked to see a corresponding investigation.
[1] Janzing, Dominik, Lenon Minorics, and Patrick Blöbaum. "Feature relevance quantification in explainable AI: A causal problem." International Conference on artificial intelligence and statistics. PMLR, 2020.
[2] Chen, Hugh, et al. "True to the model or true to the data?." arXiv preprint arXiv:2006.16234 (2020).
[3] Adebayo, Julius, et al. "Debugging Tests for Model Explanations." Advances in Neural Information Processing Systems 33 (2020): 700-712.
[4] Alvarez-Melis, David, and Tommi S. Jaakkola. "On the robustness of interpretability methods." arXiv preprint arXiv:1806.08049 (2018).
[5] Adebayo, Julius, et al. "Sanity checks for saliency maps." Advances in neural information processing systems 31 (2018).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: I appreciate the discussion at the end of the paper that hints at some relevant limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We would like to thank the reviewer for helpful comments.
> Why are only observational Shapley values considered? Have there also been conducted experiments based on the interventional Shapley Values? The interventional approach is more applicable in general and has been proven to result in Shapley values that are closer to explaining the true model behavior [1,2]. I would imagine that this should also have an impact on how distribution changes are reflected in the explanations.
In our experiments, we found that the differences between observational and interventional Shapley values do not lead to statistically significant changes in the results.
See the attached PDF, which we will add to the Appendix of the paper.
> Monitoring feature attributions instead of input data is not new and is already offered by popular ML service providers. See for instance here the functionality implemented by Google (https://cloud.google.com/vertex-ai/docs/model-monitoring/monitor-explainable-ai). I would have also liked to see such an alternative approach to use explanations for monitoring somewhere included in the experiments.
While commercial ML service providers may offer monitoring services utilizing explainable AI, the context of our research is different. Our study aims to contribute to the academic community and the field of machine learning research. The distinction lies in the research's methodology and adherence to a principled investigation and academic standards. To the best of our knowledge, our work is the first to use explanation distribution to investigate the relation between distribution shift and ML models. Thus, our research broadens the spectrum of data analytics approaches to distribution shift detection.
Google offers an individual feature monitoring service, which coincides with the research of Lundberg (https://arxiv.org/pdf/1905.04610.pdf), Section 2.7.4, “Local model monitoring…”. Our work builds on that previous research line (L115), and we propose the usage of explanation distributions and C2ST. We provide (i) mathematical derivations, (ii) synthetic examples, and (iii) real-world data examples of differences between shifts of explanation distributions vs. shifts of input data and vs. shifts of model predictions.
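To make the C2ST-on-explanation-distributions idea concrete, here is a minimal, self-contained sketch (our illustration, not the authors' implementation; the linear model, weights, and shift are hypothetical, and we use the closed-form SHAP values of a linear model with independent features in place of the `shap` package):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 4000, 5

# "Training" data and a simple linear model f(x) = w @ x (hypothetical weights).
X_tr = rng.normal(size=(n, d))
w = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

# For a linear model with independent features, the SHAP value of feature j
# is w_j * (x_j - mean_j), with the mean taken over the reference data.
mu = X_tr.mean(axis=0)
def shap_values(X):
    return w * (X - mu)

# New data with a covariate shift on the first feature.
X_new = rng.normal(size=(n, d))
X_new[:, 0] += 1.0

# Classifier two-sample test (C2ST) on the *explanation* distributions:
# label source vs. new explanations and measure held-out AUC.
S = np.vstack([shap_values(X_tr), shap_values(X_new)])
y = np.concatenate([np.zeros(n), np.ones(n)])
S_a, S_b, y_a, y_b = train_test_split(S, y, test_size=0.5, random_state=0)
clf = LogisticRegression().fit(S_a, y_a)
auc = roc_auc_score(y_b, clf.predict_proba(S_b)[:, 1])
# AUC near 0.5 means no detectable explanation shift; clearly above 0.5 signals one.
```

An AUC materially above 0.5 on held-out data is the shift indicator; for an unshifted `X_new` the same pipeline would hover around 0.5.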
Model monitoring, using explanation deviations as a proxy for model performance degradation, in the absence of labelled data, is a particularly challenging task, where no estimator will perform better consistently, particularly on tabular data L309-L316.
We also provide open-source software, allowing the community to use additional data analytics tools complementing those of big tech companies and ML service providers.
> I would have also liked to see such an alternative approach to use explanations for monitoring somewhere included in the experiments. Has it also been tried to apply explanations techniques like SHAP and LIME to the two-sample test classifier instead of the presented approach in Figure 4 (right)?
In Appendix F, “LIME as an Alternative Explanation Method”, we have added experiments using LIME. The performance results are fairly similar for both synthetic and natural data. The biggest difference seems to be in runtime, as wall time increases.
Even though there could be theoretical differences between different SHAP value estimations (e.g., interventional/observational), our empirical analysis shows few to no differences in practice. LIME, which is a distinct feature attribution method, leads to similar results. Changes to the model hyperparameters (Appendix E.2 - Figure 9) are much more impactful.
We have selected Shapley values (and not LIME) because of the theoretical properties that Shapley values have; those properties allow us to develop the mathematical analysis (in Appendix B). We also acknowledge the limitations of our approach and have correspondingly described potential future research avenues.
> Explanations are unstable and might change significantly already for minor input perturbations [4] or other desirable sanity checks that are not satisfied [5]. I wonder whether these phenomena also impact the capabilities of explanation monitoring and would have liked to see a corresponding investigation.
Even though explanations are unstable, we deal with shifts in explanation distributions, which are more robust than single-instance (local) explanations. For example, the related work [4] shows explanation instability in local explanations (see their Fig. 2 and Fig. 3). For the related work [5], the same effects can be observed; in this case, the authors focus on image data, where feature attribution methods are less successful than on tabular data.
> Why are the investigations only limited to tabular data?
Our methodology is limited to tabular data. SHAP values are more reliable and stable in tabular data. Extending to Image or Text data is not straightforward (due to local feature dependencies and much higher explanation dimensions). This, however, remains on our list of further research and is out of scope for this paper. We could further clarify this limitation in the discussion.
> Have all of the mentioned baseline methods also been applied to the considered real datasets?
In Appendix D, there are further experiments on real-world data. We will be happy to extend the Appendix if the reviewer thinks it is needed.
Note that there are also more experiments in the tutorials of the Python package.
> The conducted numerical experiments are not sufficient to demonstrate the benefits of the proposed approach. If only considering tabular data I think including more than 3 actual datasets and four prediction tasks is necessary to be convincing.
Besides numerical experiments, we provide mathematical analysis, derivations on simple cases, synthetic experiments, and open-source code. The experiments examine those datasets from several different angles (L229-235).
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I have carefully read the authors' rebuttal and my main concerns still hold, in particular regarding novelty and lack of experiments. The authors claim there is no academic work on monitoring feature attributions, as offered by different ML service providers. However, Nigenda et al. presented a detailed description of the algorithm implemented in SageMaker as part of a full KDD paper last year (full ref. below). In particular, in section "4.4 Detecting drift in model feature attributions", they describe how they use a Normalized Discounted Cumulative Gain (NDCG) score for comparing the feature attribution rankings of training and distribution shift data. In comparison to this approach, the novelty of the submission is that instead of NDCG, the authors use a two-sample classifier (which also was proposed previously) to process the shifts in feature attributions.
I do appreciate the open-source implementation provided by the authors as this is really something of value to the community - but as is, I think this nice piece of software is the main contribution here. For a fully fledged research paper, however, I would expect at the very least a comparison to NDCG.
I therefore maintain my score.
David Nigenda, Zohar Karnin, Muhammad Bilal Zafar, Raghu Ramesha, Alan Tan, Michele Donini, and Krishnaram Kenthapadi. 2022. Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD ’22), August 14–18, 2022, Washington, DC, USA. ACM, New York, NY, USA, 11 pages.
---
Reply to Comment 1.1.1:
Comment: Many thanks for pointing out the related work. Also many thanks for appreciating our open source software compared to ML software providers such as Google or Amazon.
In the less-than-a-page subsection of the mentioned paper, the authors propose an approach similar to Lundberg's, quantified using NDCG on the change in feature attributions.
The contribution does not provide any further analysis of why the proposed method works.
The scope of that paper is also somewhat distinct, focusing on MLOps software for model monitoring and model retraining rather than on the techniques used.
Some distinctions:
- They focus on monitoring model performance. Monitoring model deterioration, particularly on tabular data, is an impossible task where no method will achieve consistently optimal results. We have discussed this aspect, and we focus instead on “How did the distribution shift impact the model?”
- Their work does not provide any mathematical analysis of why it works.
- Their experimental part is done only on one dataset under synthetic shift.
- The experiment only uses a Logistic Regression. We compare several types of algorithms.
A simple example where their method would not work and ours would is a basic monotonic and uniform covariate shift, $X^{new}_j = X^{tr}_j + 1$ for every feature $j$ of the dataset. Their method will find that the feature attribution order is the same (a false negative), while ours will be a true positive.
Also, if there is a univariate shift in the most relevant feature, there is no guarantee that their method will detect it. The same holds for less important features.
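The uniform-shift counterexample above can be sketched numerically. This is our illustration under stated assumptions (a hypothetical linear model with independent standard-normal features, closed-form linear-model SHAP values, and "ranking by mean |attribution|" standing in for the NDCG-based comparison): the shift $X^{new}_j = X^{tr}_j + 1$ rescales every feature's mean absolute attribution by the same factor, so the ranking is unchanged, while the explanation distribution itself is translated and a two-sample classifier detects it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, d = 4000, 5
w = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # hypothetical linear model f(x) = w @ x

X_tr = rng.normal(size=(n, d))
X_new = rng.normal(size=(n, d)) + 1.0      # uniform shift: X_new_j = X_tr_j + 1

mu = X_tr.mean(axis=0)
phi_tr = w * (X_tr - mu)                   # closed-form linear-model SHAP values
phi_new = w * (X_new - mu)                 # each column translated by w_j

# Ranking-based monitoring: order features by mean |attribution|.
rank_tr = np.argsort(-np.abs(phi_tr).mean(axis=0))
rank_new = np.argsort(-np.abs(phi_new).mean(axis=0))
ranking_unchanged = np.array_equal(rank_tr, rank_new)   # True -> false negative

# Explanation-distribution C2ST detects the shift (held-out AUC well above 0.5).
S = np.vstack([phi_tr, phi_new])
y = np.concatenate([np.zeros(n), np.ones(n)])
S_a, S_b, y_a, y_b = train_test_split(S, y, test_size=0.5, random_state=1)
clf = LogisticRegression().fit(S_a, y_a)
auc = roc_auc_score(y_b, clf.predict_proba(S_b)[:, 1])
```

Under these assumptions `ranking_unchanged` comes out `True` while the C2ST AUC is far above 0.5, matching the false-negative/true-positive contrast described above.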
We will extend the experiments in the main body to include this method and the related work. If the reviewers find it necessary, we will happily provide a comparison in Appendix H between the two papers covering mathematical analysis, synthetic-data experiments, and natural data.
Even though both papers handle similar issues, our contribution differs in scope, width of analysis, methods, and depth. | Rebuttal 1:
Rebuttal: Experiments comparing Interventional vs Observational SHAP value calculations. To be added to the appendix.
Pdf: /pdf/2dd5fa064c6a1c752086d20db2125f77c2737253.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Efficient Symbolic Policy Learning with Differentiable Symbolic Expression | Accept (poster) | Summary: # I have reviewed this draft once before.
This paper proposes a meta-RL method to generate explainable symbolic policies. ESPL contains a symbolic network in the search space and a path selector to find the compact symbolic policy.
Strengths: I think there exists some novelty in ESPL because it contains symbolic + neural structures. Also, the symbolic policies induced by ESPL seem to be effective. The number of experiments is enough. The visualizations are cool.
Weaknesses: Post rebuttal: I upgrade my score to a borderline accept because of the reasonable rebuttal.
Although I originally leaned toward accepting this paper, I think the suggestions from the previous conference were not incorporated and the draft has not improved much, so this time I have to vote for rejection.
The major concerns are:
(1) The proposed symbolic policy looks messy and it does not improve the interpretability much. Also, there could be many optimal policies.
(2) The CartPole and other environments have guaranteed optimal policies. The theoretical analysis is missing.
(3) The authors should try other discrete environments or complicated environments with rich semantics.
I feel sorry for this paper. It was very close to acceptance last time but the AC insisted that the novelties were not enough. I don't think a resubmission without major modifications can get this paper in.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Standard errors should be shown.
2. Table3 is a waste of space.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not discussed.
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.'] | Rebuttal 1:
Rebuttal: Thanks a lot for your advice on further improving this paper. We would like to discuss them one by one. Any further discussion will be appreciated.
>**Q1.** I think the suggestions from the previous conference are not incorporated, and the draft is not improved much.
Thank you again for the time and effort invested in the review process. We highly value the insightful feedback from the reviewers, which has been immensely beneficial to our work. We have diligently incorporated your suggestions provided during the previous conference review process. These revisions encompass additional discussions of related works (including NAS and Programmatical RL), experiments conducted in a more complex environment (TORCS), and comparisons with some Programmatical RL methods. Due to page limitations of NeurIPS, these modifications have been placed in the appendix. The specific revisions include:
1. Discussion of Programmatical RL and program-guided in Appendix E.1.
2. Discussion of Neural Architecture Search in Appendix E.2.
3. Comparison with Programmatical RL (NDPS,VIPER,PROPEL,PRL) in Appendix F.1.
4. Experiments in TORCS in Appendix F.1.
>**Q2.** The proposed symbolic policy looks messy and it does not improve the interpretability much. Also, there could be many optimal policies.
1. The symbolic policies we obtained have a similar complexity compared to the previous works. In addition, the symbolic policies allow people to directly see what factors of the state are involved in the choice of action, as well as the rough relationship between the action and these factors. Besides, their superior interpretability compared to NN policies has been discussed in previous works.
2. Despite the potential existence of optimal policies in certain environments, as far as we know, obtaining an analytical expression for such policies can be quite challenging or even infeasible. Even if attainable, it requires domain-specific knowledge and intricate computations. In contrast, our approach does not necessitate prior knowledge about the environment; it autonomously learns symbolic policies through interaction with the environment.
>**Q3.** The CatPole and other approaches have guaranteed optimal policy. The theoretical analysis is missing.
For complex nonlinear systems, we cannot obtain an analytical optimal policy. For simple systems like CartPole, since we know its dynamics and equilibrium points in advance, one approach is to **approximate a linear system near the equilibrium point** and use a linear quadratic regulator (LQR) for control. However, for other more complex environments, this becomes challenging to achieve. In contrast, our approach doesn't necessitate knowledge of the system's dynamics and can automatically learn symbolic policies.
We provide the theoretical analysis of CartPole in the global rebuttal.
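The linearize-then-LQR recipe mentioned above can be sketched as follows. This is our illustration, not the authors' analysis: the physical parameters are hypothetical, and we use the common frictionless, point-mass-pole linearization of the cart-pole about the upright equilibrium.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical cart-pole parameters: cart mass, pole mass, pole length, gravity.
M, m, l, g = 1.0, 0.1, 0.5, 9.81

# Linearization about the upright equilibrium, state = [x, x_dot, theta, theta_dot],
# control u = horizontal force on the cart (frictionless, point-mass pole):
#   x_ddot     = -(m*g/M) * theta + u/M
#   theta_ddot = ((M+m)*g/(M*l)) * theta - u/(M*l)
A = np.array([[0, 1, 0, 0],
              [0, 0, -m * g / M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0.0], [1 / M], [0.0], [-1 / (M * l)]])

# LQR: minimize the integral of x^T Q x + u^T R u; the optimal policy is u = -K x.
Q, R = np.eye(4), np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)       # continuous-time algebraic Riccati eq.
K = np.linalg.solve(R, B.T @ P)            # K = R^{-1} B^T P

# The closed-loop dynamics A - B K should be stable:
# all eigenvalues in the open left half-plane.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

The resulting policy $u = -Kx$ is itself a compact symbolic (linear) expression, but obtaining it this way requires knowing the dynamics and the equilibrium, which is exactly the prior knowledge the rebuttal argues ESPL does not need.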
>**Q4.** The authors should try other discrete environments or complicated environments with rich semantics.
In this paper, we focus on tasks with continuous action space following previous works. For benchmark selection, we used the same benchmarks as in DSP to ensure a fair comparison, and we provided additional experimental results on other environments (TORCS) in the appendix.
> **Q5.** It was very close to acceptance last time but the AC insisted that the novelties were not enough. I don't think a resubmission without major modifications can get this paper in.
This is indeed frustrating; as the authors were unable to see the discussion, we do not know what happened in the final discussion. However, from the Paper Decision written by the AC, we did not see any content related to novelty. In fact, apart from summarizing the reviewers' opinions, the AC believed that a line of transformer-based SR methods developed after DSP, which can also improve data efficiency, should be compared. However, this is factually incorrect. Symbolic regression tasks and obtaining symbolic policies in reinforcement learning tasks are not the same thing. For symbolic regression tasks, one needs to regress expressions from the data generated by a certain symbolic expression, while symbolic policy learning aims to obtain symbolic policies that maximize cumulative rewards. Specifically, transformer-based SR requires ground-truth expressions during training, which might be suitable for supervised learning but is unreasonable for reinforcement learning. Unfortunately, we did not have the opportunity to submit a rebuttal.
Due to the page limitation, most of our modifications are in the appendix. We have added discussions on related works of NAS and Programmatical RL and conducted experiments on a more complex environment (TORCS). We have also included comparisons with some Programmatical RL methods.
>**Q6.** Standard errors should be shown.
In the table below, we give the mean and standard deviation of ESPL's episode rewards on the benchmark; because previous works did not provide standard deviations, we do not compare them in the table.
| | CartPole | MountainCar | Pendulum | InvDoublePend | InvPendSwingup | LunarLander | Hopper | BipedalWalker |
|---|---|---|---|---|---|---|---|---|
| ESPL | 1000$\pm$0.0 | 94.02$\pm$0.3 | -151.72$\pm$101.31 | 9359.9$\pm$1.54 | 890.36$\pm$38.80 | 283.56$\pm$20.35 | 2442.48$\pm$25.22 | 309.43$\pm$0.34 |
>**Q7.** Table3 is a waste of space.
Table 3 illustrates the number of episodes required for training. In contrast to DSP, ESPL utilizes significantly less data, showcasing its remarkable efficiency.
---
Rebuttal Comment 1.1:
Title: More detailed comments, and feedbacks to the authors' rebuttal.
Comment: Thanks to the authors for their rebuttals. Here are the extended comments and feedbacks on each point.
> 1) The proposed symbolic policy looks messy and it does not improve the interpretability much. Also, there could be many optimal policies. Although this paper claims to improve interpretability, the learned representations are still very noisy. I'm not sure whether this claim is true.
I see that the authors added human experiments. However, honestly speaking I am not convinced by human experiments conducted in such a small temporal period.
Besides, I'm more curious why the proposed approach is better than DSP in efficiency. I think both approaches use reinforcement learning, but the authors report that the proposed approach uses much fewer episodes to learn the policy. I'm not sure where the benefits come from.
Do the authors compare DSP with an implementation of https://github.com/brendenpetersen/deep-symbolic-optimization? The authors say "These methods cannot be directly applied to learning symbolic policies." It is clear that their Github page contains applications in symbolic policy learning. I'm not sure whether the statement is true or not.
> (2) Previous works show that the CartPole and other environments have guaranteed optimal policies [1]. I think the authors should compare their interpretability with [1]. More theoretical analysis of interpretability is encouraged, as previous papers show some proof of the quality of learned policy in simple environments. A smarter way is to not claim superiority in interpretability if the experiments fail to demonstrate this advantage.
I think it is better to provide comparison demos (videos or other formats) instead of human evaluations. However, Table 1 only contains results from ESPL but does not show results from related baselines. It is noteworthy that the DSP paper contains several comparisons in Table 1.
> (3) The authors should try other discrete environments or complicated environments with rich semantics. I believe the authors should try on MineCraft [2] or other higher-level environments. These semantic-rich environments require more interpretability. The logic is, we do not need a very interpretable policy in the low-level control tasks, such as PID. However, in higher-level planning tasks, we do need more interpretability.
The authors respond that they focus on continuous action space. I think it is better to conduct experiments in discrete cases as well.
[1] Verma, Abhinav, et al. "Programmatically interpretable reinforcement learning." International Conference on Machine Learning. PMLR, 2018. [2] Sun, Shao-Hua, Te-Lin Wu, and Joseph J. Lim. "Program guided agent." International Conference on Learning Representations. 2019.
---
Reply to Comment 1.1.1:
Title: Thanks For Your Replay!
Comment: >**Q1.** I see that the authors added human experiments. However, honestly speaking I am not convinced by human experiments conducted in such a small temporal period. Besides, I'm more curious why the proposed approach is better than DSP in efficiency. I think both approaches use reinforcement learning, but the authors report that the proposed approach uses much fewer episodes to learn the policy. I'm not sure where the benefits come from.
(1) The human study showed that the symbolic policies we learned achieve interpretability close to that of DSP, and the experiment was conducted before the review comments were received.
(2) It's important to note that our approach to **reinforcement learning is completely different from DSP**. DSP regards generating a symbolic policy as a sequential decision-making task and employs reinforcement learning to solve it, with the symbolic operation set as the action space. For the reinforcement learning process in DSP, a single reward (or in other words, a single training example) requires interactions with the environment over multiple episodes (an episode may involve hundreds or thousands of interaction steps). In contrast, in our method, the learning of symbolic policies is gradient-based and **doesn't involve constructing any additional sequential decision-making process**. Each interaction step of the symbolic policy with the environment serves as a direct training example. Additionally, we have developed an off-policy training approach, which allows each training example to be used multiple times during training. Furthermore, when dealing with multi-dimensional actions, DSP needs to construct multiple sequential decision tasks. Simultaneously optimizing these tasks is challenging, leading them to use pre-trained neural network policies as anchor models and to alternately optimize each dimension of action. In contrast, we can optimize symbolic policies for all action dimensions simultaneously.
Our approach is the first to require no prior environmental knowledge, no need for pre-trained policies, and offers a fully automated method for discovering symbolic policies.
>**Q2.** Do the authors compare DSP with an implementation of https://github.com/brendenpetersen/deep-symbolic-optimization? The authors say "These methods cannot be directly applied to learning symbolic policies." It is clear that their Github page contains applications in symbolic policy learning. I'm not sure whether the statement is true or not.
(1) Due to the incomplete hyper-parameter specification for constant optimization in the DSP paper, reproducing the results is challenging. We utilized the same benchmark as DSP, allowing for a direct comparison with the results presented in the DSP paper.
(2) It is important to note that the symbolic policy learning portion of the project at https://github.com/brendenpetersen/deep-symbolic-optimization is exactly the official implementation of DSP. Symbolic policy learning and symbolic regression tasks are distinct. Symbolic regression aims to discover symbolic expressions that best fit a fixed dataset. However, symbolic policy learning involves learning from interactions within an environment and has no access to a predefined dataset or symbolic expression. Consequently, applying symbolic regression methods directly to symbolic policy learning is not feasible. For instance, DSP required extensive modifications to apply the symbolic regression method [1] to symbolic policy learning.
[1] Petersen, B. K., et al. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. Proc. of the International Conference on Learning Representations, 2021. | Summary: This paper proposes to apply differentiable symbolic regression for policy learning. It shows promising results in multiple RL environments (including good average performance and learned interpretable symbolic policies).
Strengths: This paper shows promising results of differentiable symbolic regression in eight RL environments. The model design is intuitive and is shown to achieve good average performance, learn interpretable symbolic policies, and be less likely to overfit for OOD generalization in the meta-RL setting. The model is also more sample efficient than a previous neural symbolic regression model for policy learning.
Weaknesses: Nevertheless, the model is less novel, as similar techniques have been explored in domains such as differentiable interpreters (e.g., Terpret [1] or DiffForth [2]) for program synthesis and DARTS [3] for neural architecture search/pruning.
There are also concerns regarding the experiments:
* Symbolic methods are reported to perform better than neural ones even for single-task RL settings, which is counter-intuitive. There should not be overfitting problems in the naive setting, as far as I understand. Why would symbolic methods outperform neural ones?
- I find this in appendix, lines 123-124, which seems relevant but could be unfair:
> For all the environments, the proposed ESPL performs 3 independent training runs and selects the single best policy.
- I also find that they may report averaged performance over multiple runs, which may not be necessary or standard for the single-task RL setting.
* Overall, the descriptions regarding evaluation procedures are vague, e.g.,
- it's unclear whether the final policies evaluated in Table 2 are symbolic or not; and
- how to select policies in Table 1;
[1] Gaunt, Alexander L., et al. "Terpret: A probabilistic programming language for program induction." arXiv preprint arXiv:1608.04428 (2016).
[2] Bošnjak, Matko, et al. "Programming with a differentiable forth interpreter." International conference on machine learning. PMLR, 2017.
[3] Liu, Hanxiao, Karen Simonyan, and Yiming Yang. "Darts: Differentiable architecture search." arXiv preprint arXiv:1806.09055 (2018).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * Why would the symbolic methods outperform the neural ones in the single-task RL settings?
* Are there results in more complex/practical environments, e.g., with larger input and output dimensions or longer expressions?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed review. We are glad to discuss your concerns one by one. Any further discussion will be appreciated.
>**Q1.** Nevertheless, the model is less novel as similar techniques have been explored in domains such as differentiable interpreters (e.g., Terpret [1] or DiffForth [2]) for program synthesis and Dart [3] for neural architecture search/pruning.
A1. The proposed framework is novel and is different from previous works. Building differentiable frameworks and using gradient-based methods to obtain solutions have indeed found applications in many fields. However, compared to ESPL, these works differ significantly in terms of (1) the constructed differentiable frameworks, (2) the process of obtaining solutions through gradients, and (3) the specific problem domains they target. For example, DARTS formulates a super-network based on a continuous relaxation of the neural architecture representation, while we employ a densely connected symbolic network. DARTS uses the softmax function to select one proper neural network operation from a set of candidate operations for each connection. In contrast, we learn to mask out redundant connections with the proposed path selector, and only a very small percentage of connections are selected. DARTS aims to search for a good neural network structure for computer vision, while we aim to find compact symbolic policies for reinforcement learning. In Appendix E2, we have compared our proposed method with Neural Architecture Search approaches. We will add these works to the references.
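To make the contrast concrete, a path selector of this kind can be sketched as a relaxed Bernoulli ("binary concrete") gate per connection, trained with a differentiable L0 surrogate. This is a simplified illustration under our own naming (`gumbel_sigmoid_mask`, `expected_l0`), not the exact implementation in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_sigmoid_mask(log_alpha, temperature=0.5):
    # Relaxed Bernoulli ("binary concrete") sample per connection:
    # values near 1 keep a connection, values near 0 mask it out.
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)
    logistic_noise = np.log(u) - np.log(1.0 - u)
    return 1.0 / (1.0 + np.exp(-(log_alpha + logistic_noise) / temperature))

def expected_l0(log_alpha):
    # Differentiable surrogate for the L0 norm: the expected number of
    # active gates is the sum of the per-gate "on" probabilities.
    return (1.0 / (1.0 + np.exp(-log_alpha))).sum()

log_alpha = rng.normal(size=8)    # learnable per-connection logits
mask = gumbel_sigmoid_mask(log_alpha)
penalty = expected_l0(log_alpha)  # penalized so that few connections stay active
```

Minimizing `penalty` alongside the task loss drives most gate probabilities toward zero, which is how a compact expression can emerge from a dense network.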
>**Q2.** Symbolic methods are reported to perform better than neural ones even for single-task RL settings, which is counter-intuitive. There should not be overfitting problems in the naive setting, as far as I understand. Why would symbolic methods outperform neural ones? I find this in appendix, lines 123-124, which seems relevant but could be unfair:'For all the environments, the proposed ESPL performs 3 independent training runs and selects the single best policy.' I also find that they may report averaged performance over multiple runs, which may not be necessary or standard for the single-task RL setting.
A2. In fact, for a fair comparison, we adopted the **exact same testing procedures** as in DSP [1], which is the SOTA of symbolic policy learning and the most important baseline, including selecting from 3 independent training runs and averaging over multiple runs. It is worth noting that neural network policies are not necessarily superior to symbolic policies, as is also evident from the experimental results in DSP [1]. One possible reason is that symbolic policies can represent complex relationships through combinations of different symbolic operations. When the symbolic expression is already sufficient to represent the policy, optimizing fewer parameters may be simpler and more effective.
[1]Landajuela, Mikel, et al. "Discovering symbolic policies with deep reinforcement learning." International Conference on Machine Learning. PMLR, 2021.
>**Q3.** Overall, the descriptions regarding evaluation procedures are vague, e.g., it's unclear whether the final policies evaluated in Table 2 are symbolic or not; and how to select policies in Table 1;
A3. The average episode rewards of ESPL presented in Table 2 are obtained by evaluating the final obtained **symbolic policies**. Additionally, the symbolic policy given in Table 1 is the one used for evaluation. We followed the same method as in DSP to select the symbolic policy, performing independent training runs and selecting the single best policy (see Appendix lines 123-124). We will add a description in revision to make it clearer.
>**Q4.** Are there results in more complex/practical environments, e.g., with larger input and output dimensions or longer expressions?
A4. In fact, for a fair comparison, we used the exact same benchmark as in DSP (line 240-241). This benchmark includes classic control tasks such as CartPole and complex tasks like BipedalWalker, whereas earlier symbolic policy works were only evaluated on simpler tasks, such as CartPole or 1-DOF robot arm. Additionally, we included some additional experimental results (more environments and comparisons) in Appendix F.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. I have read the rebuttal but, unfortunately, cannot find convincing results to adjust my rating. In more detail,
* regarding results and evaluation, following one previous work may not be sufficient to justify the evaluation procedure. It would be better if the authors could justify more from the perspective of tasks' needs, etc.
* regarding novelty, I was aware of the differences as the authors listed in the rebuttal, but do not think they are significant. As stated in my review, I consider this work as old methods/principles applied in new domains (therefore *less* novel).
Overall, with these concerns in mind, I will keep my rating as 5 (borderline accept) for now.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply!
Comment: Thanks for your insightful feedback and for taking the time to communicate with us.
> **Q1.** regarding results and evaluation, following one previous work may not be sufficient to justify the evaluation procedure. It would be better if the authors could justify more from the perspective of tasks' needs, etc.
1. DSP is, in fact, the SOTA in the symbolic policy domain and currently the most essential baseline. To validate the effectiveness of our method, we employed the benchmarks of DSP and compared our results against the scores presented in the DSP paper. To ensure fairness in comparison, we had to utilize the same testing methodology as DSP. Furthermore, in the meta-RL experiments, for a fair comparison, we did not select symbolic policies; we adopted the same testing methodology as PEARL.
2. Due to the stochastic nature of environment initialization, averaging over multiple runs better reflects the algorithm's performance. Moreover, in the field of symbolic or programmatic policy learning, selecting symbolic policies from multiple independent runs is common practice [1][2][3][4], which helps prevent the search from being trapped in local optima.
[1] Jiří Kubalík, Eduard Alibekov, and Robert Babuška. Optimal control via reinforcement learning with symbolic policy approximation. IFAC-PapersOnLine, 50(1):4162–4167, 2017. 20th IFAC World Congress.
[2] Verma, Abhinav, et al. "Programmatically interpretable reinforcement learning." International Conference on Machine Learning. PMLR, 2018.
[3] Bastani, Osbert, Yewen Pu, and Armando Solar-Lezama. "Verifiable reinforcement learning via policy extraction." Advances in neural information processing systems 31 (2018).
[4] Landajuela, Mikel, et al. "Discovering symbolic policies with deep reinforcement learning." International Conference on Machine Learning. PMLR, 2021.
> **Q2.** regarding novelty, I was aware of the differences as the authors listed in the rebuttal, but do not think they are significant. As stated in my review, I consider this work as old methods/principles applied in new domains (therefore less novel).
A2. Building differentiable frameworks and using gradient-based methods to obtain solutions is a conceptual approach rather than a one-size-fits-all solution. There is a substantial body of work in the NAS field alone using this approach, but it takes considerable effort to design algorithms for specific problems with it. In this paper, our novelty is reflected in the design of the symbolic network structure, the probabilistic path selector, and the corresponding training algorithm, which together build the first-ever differentiable symbolic policy learning framework for efficient policy learning.
Thank you again for your insightful feedback and constructive comments. | Summary: The paper "Efficient Symbolic Policy Learning via Gradient Descent" proposes a new neural symbolic architecture for agents learned via reinforcement learning. The authors propose an architecture with an alternation of symbolic and linear layers. To finally obtain simple expressions, a probabilistic mask is learned together with the parameters, enabling the production of sparse architectures resulting in small expressions. The approach is evaluated on classical RL settings, and also on meta-RL ones.
Strengths: - Very interesting architecture
- Proposals look innovative
- Clever way to obtain sparsity
- Convincing experiments about efficiency
Weaknesses: - I cannot understand, if the architecture is new, why the authors did not first evaluate it on classical supervised learning (e.g., on classification tasks) before the much more involved reinforcement learning setting. Is the architecture only new in the field of RL? Is the main contribution of the paper the architecture or its application to RL and meta-RL?
- I feel the paper lacks important details about the architectures used (for instance, q is not defined, and nothing is said about the choice of symbolic operators, nor about their numbers n and m, etc.)
- Not many results about explainability, which is one of the main goals of the proposal
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Is the main contribution of the paper the architecture or its application to RL and meta-RL? What about applying it to supervised learning tasks?
- How is q defined? Is it a deterministic output from an LSTM, or a Gaussian from a transformer, or ...? How is it trained?
- How is the sequence of symbolic operators used in the architecture chosen, and what are they in the experiments? How many are there (please specify m and n)?
- From my understanding, since we have m+2n inputs in the symbolic layer and only m+n outputs, the architecture has a width that shrinks with depth. Is that true?
- What is the impact of lmin in (5) ? Do you have experiments to analyze it ?
- The meta-RL objective given at the end of the first paragraph of page 6 is difficult to understand: the context is both sampled from the policy and used by the policy. Could you give clearer insight into this seemingly circular definition? Also, I would have expected that for each new task there is an exploration phase not counted in the quantity to maximize: for k episodes no rewards, only collecting experience, so that in expectation we do well on the (k+1)-th episode given contexts from the first k episodes. Wouldn't it be better to present it like this?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the thoughtful comments. We would like to clarify the concerns as follows:
>**Q1.** I cannot understand, if the architecture is new ...
A1. (1) This paper aims to propose an efficient and effective method for learning symbolic policies. Symbolic policies are designed for sequential decision tasks, which are typically addressed using reinforcement learning methods rather than supervised learning.
(2) This approach is innovative and not a direct application of an existing reinforcement learning method.
However, it cannot be directly applied to supervised learning tasks: supervised learning tasks (e.g., image classification, machine translation) have very different input and output forms compared to sequential decision-making tasks. As for the symbolic network, we drew inspiration from prior works (line 116) in the field of symbolic regression. However, our symbolic network architecture, constraints on symbolic operations, training process, and process of obtaining compact expressions are distinct. These methods cannot be directly applied to learning symbolic policies.
(3) The contributions of this paper are three-fold. First, we introduce a novel gradient-based symbolic policy learning algorithm named ESPL that efficiently learns symbolic policies from scratch. The novelty is reflected in new symbolic network architecture, regularized operators, probabilistic path selectors, new loss functions, and training procedures, among others. Next, with ESPL, we develop the contextual symbolic policy for meta-RL, enabling the generation of symbolic policies for unseen tasks. This is a capability not achieved by any previous symbolic policy learning methods. Finally, we summarize our empirical results, which demonstrate the advantages of ESPL in both single-task RL and meta-RL scenarios.
>**Q2.** I feel that lacks important details about the used ...
A2. The Q-function (also known as the state-action-value function) is a crucial function in reinforcement learning, serving as a means to assess the expected return of different actions taken in a certain state. The Q-function is also referred to as the critic, and we update its parameters using the SAC algorithm, with pseudocode provided in Appendix C. The selection of symbolic operators is detailed in Section 3. To ensure a fair comparison, we adopt the same set of symbolic operations as in DSP. The architecture of the symbolic network used in experiments follows the structure shown on the right side of Figure 1 (lines 154-155). The values of m and n can also be derived from the network structure. For instance, if the next-layer symbolic operations are $[exp, log, mul, div]$, then m=2 and n=2, where m represents the number of unary functions and n represents the number of binary functions (line 123).
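To illustrate the layer layout just described, a symbolic layer with m unary and n binary operators can be sketched as below. This is a simplified illustration only; the fully connected layer that sits between symbolic layers (and maps the m+n outputs to the next layer's m+2n inputs) is omitted:

```python
import math

def symbolic_layer(x, unary_ops, binary_ops):
    # m unary ops consume the first m inputs; n binary ops consume the
    # remaining 2n inputs in consecutive pairs: m + 2n inputs -> m + n outputs.
    m, n = len(unary_ops), len(binary_ops)
    assert len(x) == m + 2 * n
    out = [op(x[i]) for i, op in enumerate(unary_ops)]
    out += [op(x[m + 2 * j], x[m + 2 * j + 1]) for j, op in enumerate(binary_ops)]
    return out

# The example above: next-layer operations [exp, log, mul, div] -> m = 2, n = 2.
y = symbolic_layer([0.0, 1.0, 2.0, 3.0, 8.0, 2.0],
                   unary_ops=[math.exp, math.log],
                   binary_ops=[lambda a, b: a * b, lambda a, b: a / b])
# y == [exp(0), log(1), 2*3, 8/2] == [1.0, 0.0, 6.0, 4.0]
```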
>**Q3.** Not much results about explainability ...
A3. (1) We give examples of interpretability analysis in Section 5.4 and report the learned symbolic policies in Table 1. The symbolic policies allow people to directly see which factors of the state are involved in the choice of action, as well as the rough relationship between the action and these factors.
(2) It has been a consensus in previous works that using symbolic expression forms of policies has better interpretability [1][2][3]. In this paper, our main motivation is to design an **efficient** symbolic policy learning method, which greatly reduces the number of interaction episodes required for learning symbolic policies and extends the application of symbolic policies to the field of meta-RL.
(3) To further measure the interpretability of the learned symbolic policies, we present a human-study in the global rebuttal.
[1] Discovering symbolic policies with deep reinforcement learning. ICML 2021
[2] Optimal control via reinforcement learning with symbolic policy approximation.
[3] Interpretable policies for reinforcement learning by genetic programming.
>**Q4.** Is the main contribution of the paper the architecture ...
A4. We outlined our contributions in A1. It is also possible to apply symbolic approaches to solve supervised learning problems (such as image classification and machine translation), but our symbolic policy learning method cannot be directly employed for this purpose.
>**Q5.** How is q defined ? ...
A5. For the definition and training methodology of Q, please refer to A2. In the experiments, the network structure for Q is a multi-layer perceptron (MLP) with a hidden layer size of [256, 256].
>**Q6.** How are chosen the sequence of symbolic ...
A6. Please refer to A2.
>**Q7.** From my understanding, since we have m+2n inputs in the symbolic layer and only m+n outputs, the architecture has a width that reduces with layers. is it true ?
A7. The architecture does not have a width that reduces with layers. There is also a fully connected layer between the symbolic layers (lines 124-125), and the output dimension of the fully connected layer is always the input dimension (m+2n) required for the symbolic operations in the next layer, so the width does not gradually decrease. The width depends on the symbolic operators in the next layer.
>**Q8.** What is the impact of lmin in (5) ? Do you have experiments to analyze it ?
A8. To constrain the minimal complexity of symbolic policies, we introduce the minimum L0 norm denoted as $l_{min}$ (line 187). We gradually reduce $l_{min}$ from the count of the original parameters w to a specified target value using a parabolic function (lines 195-196). As training progresses, this approach leads to a gradual reduction in the complexity of symbolic policies, resulting in a compact expression. The learning curve for the complexity of symbolic policies is presented in Appendix Figure 3. We intuitively set the value for the target $l_{min}$ and do not tune it.
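One plausible form of such a parabolic schedule is sketched below; since only "a parabolic function" is specified, the exact expression here is an assumption for illustration:

```python
def l_min_schedule(step, total_steps, init_lmin, target_lmin):
    # Decay l_min from the initial parameter count to the target value
    # along a parabola: fast early shrinkage, gentle near the end.
    frac = min(step / total_steps, 1.0)
    return target_lmin + (init_lmin - target_lmin) * (1.0 - frac) ** 2

# Example: anneal from 1000 parameters to a target of 20 over 100 steps.
assert l_min_schedule(0, 100, 1000, 20) == 1000.0
assert l_min_schedule(100, 100, 1000, 20) == 20.0
```

As training progresses, the shrinking `l_min` constraint forces the L0 norm of the mask, and hence the complexity of the symbolic policy, to decrease gradually.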
>**Q9.** The metaRL objective given in the end of the first paragraph of page 6 is difficult to understand
A9. We clarify the metaRL in the global rebuttal.
---
Rebuttal Comment 1.1:
Comment: Thanks to authors for their insightful answers.
However I still do not understand why it could not be applied to symbolic regression or classification tasks: why must be the problem be a sequential decision one to apply your method ?
For me this looks similar to research on neural network architectures (things like [1], for instance), but with special symbolic operators at each layer. These can be applied to supervised classification tasks...
I slightly improve my score (but I also still think that the presentation should be reworked to better detail each component and give better intuition about the proposal. For instance, more discussion about how the symbolic operators are arranged as illustrated in Fig. 1 would be welcome, maybe with ablations regarding this aspect of the contribution)
[1] Tom Veniat, Ludovic Denoyer:
Learning Time/Memory-Efficient Deep Architectures With Budgeted Super Networks. CVPR 2018: 3492-3500
---
Reply to Comment 1.1.1:
Title: Thanks for your reply!
Comment: Thank you very much for your positive feedback!
> **Q1.** However I still do not understand why it could not be applied to symbolic regression or classification tasks: why must be the problem be a sequential decision one to apply your method
A1. 1. Our framework is designed to efficiently learn symbolic policies, involving reinforcement learning processes and corresponding meta-RL designs tailored for sequential decision-making processes.
2. When talking about symbolic networks and probabilistic path selectors alone, they could be applicable to supervised learning, but this introduces additional challenges. For instance, classification tasks often entail high-dimensional inputs, whereas symbolic expressions typically require lower-dimensional inputs. This might necessitate integration with certain feature extraction methods, such as beta-VAE and Slot Attention. Indeed, this presents an intriguing direction for future research.
> **Q2.** For me this looks similar to researches on neural network architectures (such things like [1] for instance), but with special symbolic operators at each layer. These can be applied to supervised classification tasks...
A2. Our approach is related to some network architecture search works. While we search for symbolic policies from the symbolic network, a series of network architecture search works construct hypernetworks and search for high-performing or computationally efficient network structures from these hypernetworks. However, our approach differs in terms of network construction, search algorithms, and the target task. We have provided a comparison with related works of NAS in Appendix E2.
> **Q3.** More discussion about how are symbolic operators arranged as illustrated in fig 1 would be welcome, maybe giving ablations regarding this aspect of the contribution.
A3. As described in lines 155-157, we heuristically involve more multiplication and division operators at shallow layers to provide more choice of input processed by simple operators for complex operations such as sines and cosines. We also provide an ablation study in Section 5.5. The results are presented in Table 5. Both the path selector and dense connections play a crucial role in the performance of learned symbolic policies. When the path selector is replaced with L1 or dense connections are removed, performance significantly deteriorates in certain environments. When employing dense connections and the path selector, but without symbolic arrangement (as shown in the middle structure of Figure 1), the approach can perform well in all environments. However, our heuristic symbolic arrangement further enhances the algorithm's performance. We also provide the ablation study results for meta-RL in Appendix F.
We will revise the representation for better clarity.
Thanks again for your time and effort invested in the review! | Summary: The paper proposes ESPL, a method for learning symbolic policies in environments with low-dimensional state spaces. ESPL uses a densely connected neural network structure (like DenseNet), where the activations in each layer are replaced with a hand-picked set of functions, such as multiplication, division, log, and exp. Second, ESPL uses the Gumbel-Softmax trick to learn a masking function with a minimal $L_0$ norm. This function ensures that the method learns compact symbolic policies by masking out redundant pathways in the neural network. ESPL is shown to be 100 to 1000 times more sample-efficient than a prior symbolic learning approach in single-task RL. The authors also propose a version that can be conditioned on task context for meta-RL.
Strengths: 1. The proposed method is well motivated and clearly explained. $L_0$ regularization of the parameters of Bernoulli random variables is an elegant approach to learning compact symbolic expressions.
2. In single-task RL, ESPL outperforms a prior symbolic learning approach $DSP^0$ with between 100 and 1000 times fewer training samples.
3. A task-conditioned version of ESPL called CSP outperforms prior (non-symbolic) approaches in a meta-learning setup. The authors point out that the symbolic policies are much faster to run than the baseline neural net policies.
4. Source code is included.
Weaknesses: 1. The discovered symbolic policies in Table 1 seem to be somewhat more complex than the policies from $DSP^0$. It could be useful to additionally measure the complexity of the discovered policies for the various methods (if such measure exists).
2. It is not very clear how much effort went into hand-picking the activation functions in each layer of the network. Is it necessary to hand-design a different network for each experiment / environment?
Minor:
* Missing space before `(` and `[`. This is repeated at least 10 times throughout the paper, e.g. lines 33 and 34.
* Line 109: repeat what ESPL stands for.
* Line 210: mixing up $t$ and $T$.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Is Figure 1 right the exact architecture used in the experiments? Or do you hand-design a different arrangement of activations for each experiment?
2. Is it possible to scale symbolic regression to high-dimensional state spaces?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations of ESPL are not explicitly addressed. I am especially curious about the possibility of scaling up to high-dimensional (e.g. image) state spaces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive review, insightful feedback and constructive comments that help improve the quality of the paper! We are glad to answer your questions and would appreciate any further response.
> **Q1.** The discovered symbolic policies in Table 1 seem to be somewhat more complex than the policies from DSP. It could be useful to additionally measure the complexity of the discovered policies for the various methods (if such measure exists).
A1. Thanks for your advice.
Because there was no well-defined measure in the previous work, we compare the length of the symbolic policies and define:
$length=\\frac{\\sum_{i=1}^{n}(N_o+N_c+N_v)}{n}$, where $n$ is the dimension of the action, $N_o$ is the number of operators, $N_c$ is the number of constant terms, and $N_v$ is the number of variable terms. We give a comparison of the length of the symbolic policies in the following table.
| | Average | CartPole | MountainCar | Pendulum | InvDoublePend | InvPendSwingup | LunarLander | Hopper | BipedalWalker |
|---|---|---|---|---|---|---|---|---|---|
| ESPL | 12.91 | 3 | 6 | 7 | 15 | 13 | 16.5 | 24.6 | 17 |
| DSP | 8.25 | 3 | 4 | 8 | 1 | 19 | 6.5 | 12 | 12.5 |
Across the benchmark environments used in the literature, ESPL produces longer symbolic policies than DSP in some environments and similar or shorter ones in others. In general, the symbolic policies produced by ESPL are only slightly longer than DSP's, and this degree of difference has little practical effect on the symbolic policies (they show similar interpretability; see global rebuttal 2).
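The length measure defined above can be computed as follows; the counts in the example are hypothetical, for illustration only:

```python
def policy_length(per_dim_counts):
    # per_dim_counts: one (N_o, N_c, N_v) tuple per action dimension, i.e.
    # (operators, constant terms, variable terms); the length is their sum
    # averaged over the n action dimensions.
    return sum(sum(counts) for counts in per_dim_counts) / len(per_dim_counts)

# Hypothetical 2-D action policy: dimension 1 uses 3 operators, 1 constant
# and 2 variables; dimension 2 uses 5 operators, 2 constants and 3 variables.
length = policy_length([(3, 1, 2), (5, 2, 3)])  # (6 + 10) / 2 = 8.0
```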
> **Q2.** It is not very clear how much effort went into hand-picking the activation functions in each layer of the network. Is it necessary to hand-design a different network for each experiment / environment?
A2. We did not hand-design a different network for each environment. We use the same symbolic network structure for all environments (as stated in lines 154-155), and use the same set of symbolic operators as the DSP for a fair comparison.
> **Q3.** Is Figure 1 right the exact architecture used in the experiments? Or do you hand-design a different arrangement of activations for each experiment?
A3. Yes, we use the same architecture shown in Figure 1 right for all environments. We don't need to hand-design a different arrangement of activations for each environment.
> **Q4.** The limitations of ESPL are not explicitly addressed. I am especially curious about the possibility of scaling up to high-dimensional (e.g. image) state spaces.
A4. ESPL is currently designed for continuous action space environments with state vector inputs, but not for environments with a discrete action space or high-dimensional observation inputs. As described in Appendix G, for tasks with high-dimensional observations such as images, the proposed ESPL and CSP cannot directly generate a symbolic policy. However, we can employ a neural network to extract the environmental variables and generate a symbolic policy based on these environmental variables. Methods such as beta-VAE [1] or Slot Attention [2] can be used to extract environmental variables from high-dimensional observations. We leave this to future work.
[1] Higgins, Irina, et al. "beta-vae: Learning basic visual concepts with a constrained variational framework." International conference on learning representations. 2016.
[2] Locatello, Francesco, et al. "Object-centric learning with slot attention." Advances in Neural Information Processing Systems 33 (2020): 11525-11538.
>**Q5.** Is it possible to scale symbolic regression to high-dimensional state spaces?
Please refer to A4.
Thanks again for your detailed review. We will revise the symbols, expressions, and formatting errors in the paper in revision.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for answering my questions, I am in favor of accepting the paper. | Rebuttal 1:
Rebuttal: > **Theoretical analysis of cartpole.**
The dynamic of cartpole system can be defined as:
$\\ddot{x}=\\frac{8fa+2m \\sin \\theta(4L\\dot{\\theta}^2-3g\\cos \\theta)}{8M-3m \\cos2\\theta+5m}$
$\\ddot{\\theta}= \\frac{g \\sin \\theta- ( \\cos \\theta(fa+Lm \\dot{ \\theta}^2 \\sin \\theta))/(m+M)}{L( 4/3- (m \\cos^2\\theta)/(m+M))}$
Where $f$ is the coefficient of force, $a$ represents the action, $m$ is the weight of the pole, $M$ is the weight of the cart, $L$ is the half-pole length, $\\theta$ is the angle between the pole and the vertical direction, and $x$ denotes the horizontal coordinate of the cart.
Define $X=\\begin{bmatrix} x\\\\ \\dot{x}\\\\ \\theta\\\\ \\dot{\\theta}
\\end{bmatrix}$, then the derivative of $X$: $\\dot{X}=\\begin{bmatrix} \\dot{x}\\\\ \\ddot{x}\\\\ \\dot{\\theta}\\\\ \\ddot{\\theta}
\\end{bmatrix}=\\begin{bmatrix} \\dot{x}\\\\\\frac{8fa+2m\\sin\\theta(4L\\dot{\\theta}^2-3g\\cos \\theta)}{8M-3m\\cos2\\theta+5m}\\\\\\dot{\\theta}\\\\\\frac{g \\sin \\theta- ( \\cos \\theta(fa+Lm \\dot{ \\theta}^2 \\sin \\theta))/(m+M)}{L( 4/3- (m \\cos^2\\theta)/(m+M))}
\\end{bmatrix}$
According to the Hartman-Grobman theorem, the local stability of this nonlinear system near its equilibrium point is equivalent to that of the linearized system near the equilibrium point. For the cartpole system, the equilibrium point is $[x,\\dot{x},\\theta,\\dot{\\theta}]=[0,0,0,0]$.
If $a = 0$, the system can be linearized as:
$\\dot{X}=\\begin{bmatrix}0&1&0&0\\\\0&0&\\frac{-6gm}{8M+2m}&0\\\\0&0&0&1\\\\0&0&\\frac{g}{L(4/3-m/(m+M))}&0\\end{bmatrix}\\begin{bmatrix} x\\\\\\dot{x}\\\\\\theta\\\\\\dot{\\theta}
\\end{bmatrix}$
Calculate its eigenvalues:
$[0, 0, 3.97114593, -3.97114593]$.
Due to the presence of positive eigenvalues, according to the Hartman-Grobman theorem, the system is unstable.
If $a=17.17\\theta+1.2\\dot{\\theta}$, which is the policy learned by ESPL, linearizing the system near the equilibrium point gives:
$\\dot{X}=\\begin{bmatrix}0&1&0&0\\\\0&0&\\frac{137.36f-6gm}{8M+2m}&\\frac{9.6f}{8M+2m}\\\\0&0&0&1\\\\0&0&\\frac{g-17.17f/(m+M)}{L(4/3-m/(m+M))}&\\frac{-1.2f}{L(m+M)(4/3-m/(m+M))}\\end{bmatrix}\\begin{bmatrix} x\\\\\\dot{x}\\\\\\theta\\\\\\dot{\\theta}
\\end{bmatrix}$
Calculate its eigenvalues:
$[0+0.j, 0+0.j, -26.34+6.65014286j, -26.34-6.65014286j]$.
Since all the real parts of the eigenvalues are non-positive, according to the Hartman-Grobman theorem, the system is stable.
Therefore, for the CartPole environment, the policies learned through ESPL can maintain the stability of the CartPole system.
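This eigenvalue check is easy to reproduce numerically. Below is a hedged sketch (ours, not the authors' code) that builds both linearizations and inspects their eigenvalues with NumPy. The parameter values are our assumptions: standard CartPole constants ($g=9.8$, pole mass $m=0.1$, cart mass $M=1.0$, half-pole length $L=0.5$) and a force coefficient $f=30$ chosen so that the closed-loop eigenvalues approximately match the values reported above; the denominator term uses $m/(m+M)$, as in the nonlinear $\ddot{\theta}$ dynamics.

```python
import numpy as np

# Assumed CartPole parameters (not stated in the rebuttal): g, m, M, L, f.
g, m, M, L, f = 9.8, 0.1, 1.0, 0.5, 30.0
denom = L * (4.0 / 3.0 - m / (m + M))

# Linearization around the upright equilibrium with a = 0 (uncontrolled).
A0 = np.array([
    [0, 1, 0, 0],
    [0, 0, -6 * g * m / (8 * M + 2 * m), 0],
    [0, 0, 0, 1],
    [0, 0, g / denom, 0],
], dtype=float)
eig0 = np.linalg.eigvals(A0)
print(np.sort(eig0.real))  # one strictly positive eigenvalue -> unstable

# Linearization with the ESPL feedback a = 17.17*theta + 1.2*dtheta.
A1 = np.array([
    [0, 1, 0, 0],
    [0, 0, (137.36 * f - 6 * g * m) / (8 * M + 2 * m), 9.6 * f / (8 * M + 2 * m)],
    [0, 0, 0, 1],
    [0, 0, (g - 17.17 * f / (m + M)) / denom,
     -1.2 * f / (L * (m + M) * (4.0 / 3.0 - m / (m + M)))],
], dtype=float)
eig1 = np.linalg.eigvals(A1)
print(np.max(eig1.real))  # all real parts non-positive -> stable closed loop
```

With these assumed constants the closed-loop pair comes out near $-26.34 \pm 6.65j$, consistent with the rebuttal's figures.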
> **Human study for interpretability.**
The assessment of policy interpretability requires some understanding of the environments and what each state variable means. We invited ten researchers to rate the interpretability of policies, with a maximum of 20 minutes per policy. On a five-point scale, we told them that interpretability could be judged by whether the policy expression reveals what the policy bases its decisions on and the possible correspondences between actions and states. A score of five indicated that the policy was highly interpretable and could have been designed by humans, while a score of zero indicated that it was completely uninterpretable, just like a neural network policy. We measured the average interpretability score of the policies. The interpretability scores obtained by ESPL and DSP are:
| | CartPole | MountainCar | Pendulum | InvDoublePend | InvPendSwingup | LunarLander | Hopper | BipedalWalker |
|---|---|---|---|---|---|---|---|---|
| ESPL | 5 | 5 | 4.5 | 4.1 | 4.3 | 3.9 | 2.6 | 3.1 |
| DSP | 5 | 5 | 4.5 | 5.0 | 4.2 | 4.0 | 3.1 | 3.2 |
Based on the results of the human study, we found that when the symbolic expressions were relatively short, there was a consensus that they were highly interpretable. As the expressions grew longer, the interpretability score decreased: it can be difficult to fully understand a long symbolic expression in a short time. Nevertheless, the raters generally agreed that interpretability is much higher than that of a black box; at a minimum, they could tell which states affect the action and the partial correlations between them.
> **Clarification of metaRL**
We use the same metaRL objective, training, and testing settings as in previous works[1][2]. The context refers to the trajectories collected from the environment, and the context encoder derives a context variable based on this context. The context variable can be understood as a kind of identifier for the environment or task. During the training process, the policy maximizes rewards on the corresponding task based on the context variable. Therefore, we can anticipate that in a new task, as long as trajectories are collected and the context variable is obtained, metaRL can exhibit strong performance on that task. We also provide the pseudocode for the training process in Appendix Algorithm 2. Furthermore, the testing process of meta-RL is similar to your description: during testing, for a new task, the agent first undergoes an exploration phase to collect trajectories and update the context variable (lines 213-214). In subsequent phases, metaRL can achieve favorable results on the task based on the context variable. We will revise the description for clarity in the revision. Due to space constraints and the well-defined nature of the context-based metaRL process in the literature, we only provided a brief overview in the paper. We will add a detailed introduction to metaRL in the appendix.
[1] Rakelly, Kate, et al. "Efficient off-policy meta-reinforcement learning via probabilistic context variables." International conference on machine learning. PMLR, 2019.
[2] Sarafian, Elad, Shai Keynan, and Sarit Kraus. "Recomposing the reinforcement learning building blocks with hypernetworks." International Conference on Machine Learning. PMLR, 2021 | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask? | Accept (poster) | Summary: The authors perform a large-scale empirical analysis of models from the NAS-Bench-Macro benchmark which motivates the development of (1) the node-path balancing principle and (2) the Node-Path Balancing Pruner (NPB) -- a data-agnostic pruning-at-initialization (PAI) scheme. At a high level, the node-path balancing principle suggests that networks that have a good balance between the number of nodes and edges on a continuous/unbroken path from the neural network input to the neural network output have a higher capacity for performance (i.e., classification accuracy). Their pruning algorithm, NPB, then solves a constrained optimization problem to strike this node-path balance for a given sparsity level. The driving factor for this research is to design advanced PAI algorithms for producing efficient models with better performance (i.e., classification accuracy) than existing PAI schemes.
**Main contributions:**
* Node-path balancing principle: Empirical analysis of NAS-Bench-Macro benchmark yields new perspective and principle for guiding design of pruning-at-initialization (PAI) schemes.
* NPB: A novel data-agnostic PAI scheme designed using the node-path balancing principle which outperforms existing PAI methods at lower FLOPs during inference.
**Post-rebuttal revision:**
Following the rebuttal, I feel that the authors have clarified my questions and concerns. Accordingly, I am increasing my rating to weak accept with the expectation that they will accommodate my requested minor revisions.
Strengths: * Large scale analysis of NAS-Bench-Macro benchmark models suggesting connection between model performance and balance of effective nodes and paths.
* **NPB pruning-at-initialization algorithm**: Produces results comparable to state-of-the-art PAI scheme PHEW with fewer FLOPs during inference.
Weaknesses: * **Looseness/inexactness of the node-path balancing principle**: I found the statement/rigor of this guiding principle (which was used to inform the design of NPB) to be very inexact. The motivating plots in Figure 4 suggest that there is nuance between this node-path balance and the resulting classification accuracy, but this was not further explored. Specifically, some networks with similar node-path balance in Figure 4 appear to have large variations in classification accuracy.
* **Lack of theory for node-path balancing principle**: Developing theory to study the principle would help design/inform a more exact guiding principle for PAI schemes.
* **Figure clarity**: It was not always evident what the takeaway is from certain figures and I feel that the presentation of certain figures could be improved. I found myself staring at some figures for a while trying to decipher what was important/meaningful as there is a lot of data. This isn’t to say there is not meaningful information in the figures but I feel like more careful curation/design of the figures could be enacted. For suggestions/specific figures, see comments under “Questions”.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How did you compute/measure the FLOPs presented in Table 1? I may have missed something, but I did not see an explanation in the main body or in the appendix.
2. Did you observe training/inference speedup due to reduced FLOPs? If so, it would be worth mentioning as NPB tends to require fewer FLOPs than other methods (at least at sparsity >= 90%).
3. In some instances, models appear to have very different balance of nodes and paths but comparable accuracy (e.g., ResNet20 on CIFAR-10 sparsity 99%, VGG19 on CIFAR-100 sparsity 90%) or similar balance of nodes and paths but very different accuracy (VGG19 on CIFAR-100 sparsity 68.38%, ResNet18 on Tiny-Imagenet sparsity 90%). How would your principle in its current form explain this?
Comments about figures:
* **Figure 3**: I had to stare at this for a while to figure out what the takeaway was. It seems that more effective nodes can help improve accuracy, but it is not enough on its own. I’m not sure if there is a different way to present the data to make it more readily apparent (and I’m not counting it against you) but I just wanted to mention it.
* **Figure 4**: I think the scale of the accuracy colormap should be the same across all 4 plots (i.e. darkest blue is min accuracy across all 4 plots ~65% and yellow is max across all 4 plots ~93%). Then accuracy diversity across the four groups would be more apparent (i.e. more params -> higher accuracy) and you could just have a single colorbar.
* **Figure 5**: For each dataset (i.e. row) I think the x- and y-axis range should be the same across the four plots. This revision should better highlight trends/changes in node-path balance with increasing sparsity.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Repeating a weakness here as I believe it is a limitation that is not acknowledged/discussed:
* **Looseness/inexactness of the node-path balancing principle**: I found the statement/rigor of this guiding principle (which was used to inform the design of NPB) to be very inexact. The motivating plots in Figure 4 suggest that there is nuance between this node-path balance and the resulting classification accuracy, but this was not further explored. Specifically, some networks with similar node-path balance in Figure 4 appear to have large variations in classification accuracy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the time to review our work. We would like to address all weaknesses pointed out by you point-by-point below:
> Q1. Looseness/inexactness of the node-path balancing principle
>
A1: In the Neural Architecture Search (NAS) context, it's important to consider that aspects beyond node-path balance can contribute to the final classification accuracy. In the case of NAS-Bench-Macro, networks with similar node-path balance but varying classification accuracies can be affected by other architectural configurations, such as differences in pooling layers, kernel sizes, and expansion ratios in the MobileNet-v2 block. These architectural variations often result in different numbers of parameters, influencing overall network performance.
However, in the pruning context, pruning methods maintain the same network structure while pruning connections within the network at specific sparsities. Consequently, our experiments did not explore architectural elements beyond node and path balance.
We emphasize that our experiments with NAS demonstrated that networks with balanced nodes and paths tend to exhibit superior performance. Based on these findings, we have taken the step to use the balance between two metrics as a fundamental criterion for designing pruning methods.
> Q2. Lack of theory for node-path balancing principle.
>
A2: We agree that our paper is not a theoretical study. However, the underlying motivations for our method are justified and easy to understand. Recall that optimizing either nodes or paths alone leads to a loss in the remaining quantity, which results in suboptimal performance. This statement is intuitive, and we think additional theoretical analysis would add little to our paper at this stage. The key theoretical challenges here are identifying the balancing points and the precise relation between node-path balance and performance. These are difficult research problems in their own right, and we postpone them to future work.
> Q3. How did you compute/measure the FLOPs presented in Table 1?
>
A3: We appreciate your inquiry regarding the computation of FLOPs presented in Table 1. To derive the FLOPs values, we adopted the methodology outlined in [1]. It's worth noting that the FLOPs calculation is performed on a layer-by-layer basis. At each layer, the FLOPs are computed by considering the number of parameters, the input size of the layer, and the layer's sparsity.
We will incorporate the explanation of the FLOPs measurement methodology in the revised version of the paper.
[1] https://github.com/Eric-mingjie/rethinking-network-pruning/blob/master/cifar/weight-level/count_flops.py
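As an illustration of this layer-by-layer counting, here is a minimal sketch (our own hedged approximation of the idea in the referenced script, not the authors' code): the dense multiply-accumulate count of each layer is scaled by that layer's unpruned-weight density (one minus its sparsity).

```python
def conv_flops(c_in, c_out, k, h_out, w_out, density=1.0):
    # dense multiply-accumulates for one k x k conv layer producing an
    # (h_out, w_out) feature map, scaled by the fraction of unpruned weights
    dense = c_in * c_out * k * k * h_out * w_out
    return int(dense * density)

def linear_flops(n_in, n_out, density=1.0):
    # same idea for a fully connected layer
    return int(n_in * n_out * density)

# e.g. the first 3x3 conv of a CIFAR network, pruned to 90% sparsity:
print(conv_flops(3, 64, 3, 32, 32, density=0.1))
```

Summing these per-layer counts over the whole network yields the theoretical FLOPs figures of the kind reported in Table 1.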
> Q4. Did you observe training/inference speedup due to reduced FLOPs? If so, it would be worth mentioning as NPB tends to require fewer FLOPs than other methods (at least at sparsity >= 90%).
>
A4: We appreciate your interest in the potential speedup resulting from reduced FLOPs in our proposed method. In the case of NPB, which employs unstructured pruning similar to other PaI methods, we currently provide theoretical FLOPs measurements rather than real-world speedup figures. This limitation arises from the constrained support available from off-the-shelf GPUs for unstructured pruning techniques.
However, it's important to note that while unstructured pruning may not exhibit real-time speedups presently, it remains a valuable mathematical framework and an empirical testbed for exploring novel Sparse Neural Network algorithms. Furthermore, there is a growing trend of improved practical support for unstructured pruning in the field [2].
[2] https://arxiv.org/abs/2302.02596
> Q5. In some instances, models appear to have very different balance of nodes and paths but comparable accuracy (e.g., ResNet20 on CIFAR-10 sparsity 99%, VGG19 on CIFAR-100 sparsity 90%) or similar balance of nodes and paths but very different accuracy (VGG19 on CIFAR-100 sparsity 68.38%, ResNet18 on Tiny-Imagenet sparsity 90%). How would your principle in its current form explain this?
>
A5: The disparities in accuracy, as you noted, can stem from various factors such as the number of effective parameters (as defined in Appendix B), the distribution of unpruned weights within kernels, and more. These intricacies can lead to differences in performance despite comparable or dissimilar node-path balances.
However, a general trend can still be discerned. We find that when the number of active neurons and input-output paths of a sparse network align within the balancing range, there tends to be a likelihood of achieving superior performance after training.
> Q6. Figure 3: I had to stare at this for a while to figure out what the takeaway was. It seems that more effective nodes can help improve accuracy, but it is not enough on its own.
>
A6: Thank you for your time and consideration. Indeed, Figure 3 shows that more nodes can help enhance performance, but this is not always the case. Apart from nodes, paths play an important role in the quality of subnetworks, and the two metrics mutually influence each other. Therefore, we conjecture that to create an optimal subnetwork, we should take both nodes and paths into account.
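For intuition on the two metrics, here is a hedged sketch (our own illustration, not the paper's exact definitions) of how effective paths and effective nodes can be counted for an MLP described by binary masks: the path count is a matrix product applied to a ones vector, and a node is effective iff at least one complete input-output path passes through it.

```python
import numpy as np

def effective_paths(masks):
    # masks[i] is a binary matrix of shape (h_i, h_{i+1});
    # total input-output paths = 1^T M_1 M_2 ... M_L 1
    v = np.ones(masks[0].shape[0])
    for m in masks:
        v = v @ m
    return int(v.sum())

def effective_nodes(masks):
    # forward path counts from the input to every node
    fwd = [np.ones(masks[0].shape[0])]
    for m in masks:
        fwd.append(fwd[-1] @ m)
    # backward path counts from every node to the output
    bwd = [np.ones(masks[-1].shape[1])]
    for m in reversed(masks):
        bwd.append(m @ bwd[-1])
    bwd = bwd[::-1]
    # a node is effective iff it lies on at least one complete path
    return sum(int(((f > 0) & (b > 0)).sum()) for f, b in zip(fwd, bwd))

# toy 2-2-1 network: two parallel edges, but only one reaches the output
masks = [np.array([[1, 0], [0, 1]]), np.array([[1], [0]])]
print(effective_paths(masks), effective_nodes(masks))
```

In this toy case the second hidden node and the second input are dead weight: the network has 1 effective path and 3 effective nodes, even though 4 nodes and 3 edges survive pruning.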
> Q7. Figure 4: I think the scale of the accuracy colormap should be the same across all 4 plots. Then accuracy diversity across the four groups would be more apparent (i.e. more params -> higher accuracy) and you could just have a single colorbar.
>
A7: Thank you for your suggestions to improve our paper. We will revise the figure and update it in the next version.
> Q8. Figure 5: For each dataset (i.e. row) I think the x- and y-axis range should be the same across the four plots. This revision should better highlight trends/changes in node-path balance with increasing sparsity.
>
A8: Thank you for your suggestions to improve our paper. We will revise the figure and update it in the next version.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I would like to thank the authors for their responses to my review. I respond inline below.
> A1: In Neural Architecture Search ...
Thank you for clarifying. This insight resolves my concern with Figure 4 (i.e., the varying architectural configurations in NAS-Bench-Macro can account for variations in accuracy for similar node-path balance). For clarity, I recommend including a sentence noting this point somewhere in the final version of the paper (if there is not already a discussion on this in the paper that I missed).
> A2: We agree that our ...
Thanks for addressing this. I agree with your response and acknowledge that it is perhaps beyond the scope of this paper to establish theory for the relationship between node-path balance and performance.
> A3: We appreciate your inquiry ...
Thanks for the explanation and reference. I agree that this should be noted in the revised version of the paper.
> A4: We appreciate your interest in ...
Thank you for your clarification. I did not mean this comment to detract from your contribution but, rather, see if you could highlight it as a benefit of your approach. I agree that unstructured sparsity has its merits (as highlighted in question 3.5 of your reference).
> A5: The disparities in accuracy, as you noted, can stem from various factors such as the number of effective parameters (as defined in Appendix B), the distribution of unpruned weights within kernels, and more. These intricacies can lead to differences in performance despite comparable or dissimilar node-path balances.
However, a general trend can still be discerned. We find that when the number of active neurons and input-output paths of a sparse network align within the balancing range, there tends to be a likelihood of achieving superior performance after training.
Thank you for your response. I understand and acknowledge that many factors can play into the resulting accuracy. I recommend mentioning this somewhere (perhaps the discussion around Figure 5 or in the conclusion) as a limitation of the node-path balancing principle because, in your own words, “These intricacies can lead to differences in performance despite comparable or dissimilar node-path balances”. You could also mention that future/follow-up research could seek to better understand this nuance.
> A7&A8: Thank you for your suggestions to improve our paper. We will revise the figure and update it in the next version.
Thanks.
I feel that the authors have clarified my questions and concerns. Accordingly, I am increasing my rating to weak accept with the expectation that they will accommodate my requested minor revisions.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer ygEG
Comment: We sincerely appreciate your quick response and invaluable feedback. We will definitely incorporate your suggestions into the next version of the paper. | Summary: This paper examines Pruning at Initialization (PaI) methods using two novel metrics: the number of effective paths and the number of effective nodes. The authors find that layer reshuffling negatively impacts the performance of sparse neural networks obtained through PaI methods in the extreme sparsity regime. Based on this finding, the authors present a novel data-agnostic PaI method, the Node-Path Balancing Pruner (NPB), which achieves SOTA performance by effectively balancing the two proposed metrics.
Strengths:
- This paper provides a clear rationale for introducing the two proposed metrics; the authors empirically demonstrate the importance of the metrics via two experiments. First, experiments conducted on the NAS benchmark reveal a strong correlation between the two metrics and the final performance of sparse networks. Secondly, through layer shuffling experiments, the authors illustrate that simply increasing the number of effective nodes in the extreme sparsity regime is insufficient to prevent performance degradation due to the sharp decline in the number of effective paths.
- The proposed method demonstrates superior performance compared to the baselines, and the paper covers a fair amount of relevant previous studies.
- The paper is effectively structured and exhibits clear and concise writing
Weaknesses: - While many previous PaI methods prioritize the weight magnitude as the importance metric, the main motivation behind NPB focuses on the topology of the sparse network. Thus, the ablation study with respect to different weight initialization would strengthen the paper. Further, is NPB robust to *weight reinitialization*?
- While the layer shuffling experiments are well-justified, it is confined to a single configuration (CIFAR-10, ResNet20). I wonder if similar observations can be made under different experimental settings. Also, does layer-wise shuffled NPB exhibit similar behavior as to layer-wise shuffled PHEW?
- There are two points in need of clarification. First, in lines 332-333, the term "chunks" is unclear, and it is not apparent how the parallel computation can be effectively achieved. Second, in the case of convolutional layers mentioned in lines 261-263, it is unclear whether equations 3-5 are still applicable when the mask vector is not boolean.
- Although the authors claim that NPB generally outperforms PHEW, the standard deviation with respect to random seeds should be included in the paper for fair comparison. For instance, the performance gain depicted in Fig. 7 seems marginal. Additionally, in lines 694-695, it is stated that NPB consistently outperforms the baselines regardless of the hyperparameter configuration. However, it should be noted that certain choices may lead to worse performance compared to the baselines (refer to Table 1).
I am willing to raise my score if above concerns are well addressed during rebuttal.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In equation 3, what is the purpose of “min” operation? Or is it just a typo of $\min(m_{ij}^l - 1, 0)$?
- What kind of optimizer is implemented in the code as to approximating the original integer programming formulation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper presents a future research direction in lines 706-707, which is included in the Appendix. However, it is equally important for the authors to acknowledge and discuss the limitations of their proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the time to review our work. We would like to address all weaknesses pointed out by you point-by-point below:
> Q1. While many previous PaI methods prioritize the weight magnitude as the importance metric, the main motivation behind NPB focuses on the topology of the sparse network. Thus, the ablation study with respect to different weight initialization would strengthen the paper. Further, is NPB robust to weight reinitialization?
>
A1: We appreciate your suggestion for enhancing our paper. In response, we conducted experiments with different weight initializations for NPB in the ResNet20 on CIFAR-10 setting and visualized the results in Figure 2 of the attached file. In our paper, we use Kaiming Normal initialization for the network weights. The results show that different weight initializations affect only the performance of subnetworks. While the number of effective nodes remains relatively consistent across initializations, a minor variation can be observed in the number of effective paths between subnetworks. This discrepancy is attributed to the distribution of remaining weights within the convolutional layers' kernels. This experiment demonstrates that NPB is robust to weight reinitialization.
> Q2. While the layer shuffling experiments are well-justified, it is confined to a single configuration (CIFAR-10, ResNet20). I wonder if similar observations can be made under different experimental settings.
>
A2: We have run additional layer-wise shuffling experiments with VGG19 on CIFAR-10 using SynFlow and SNIP, visualized in Figure 3 of the attached file. In general, this experiment shows the same behavior as the ResNet20 on CIFAR-10 setting.
> Q3. Also, does layer-wise shuffled NPB exhibit similar behavior as to layer-wise shuffled PHEW?
>
A3: We have run shuffling experiments for NPB in the ResNet20 on CIFAR-10 setting and report the results in Table 1 of the attached file. After shuffling subnetworks found by NPB, the subnetworks' width broadens while the number of paths drops significantly as the level of sparsity increases. However, similar to PHEW, the unmodified subnetworks perform better than their shuffled counterparts.
> Q4. There are two points in need of clarification. First, in lines 332-333, the term "chunks" is unclear, and it is not apparent how the parallel computation can be effectively achieved. Second, in the case of convolutional layers mentioned in lines 261-263, it is unclear whether equations 3-5 are still applicable when the mask vector is not boolean.
>
A4: To be more specific, we consider layer $l$ with mask $\mathbf{m}^{(l)} \in \mathbb{R}^{h^{(l)} \times h^{(l+1)}}$ in which $h^{(l)}$ and $h^{(l+1)}$ are the number of nodes in layer $l$ and $l+1$. We divide $h^{(l+1)}$ nodes into $K$ equal chunks.
Instead of directly solving for $\mathbf{m}^{(l)}$, we solve $K$ problems $[\mathbf{m}^{(l)}_1, \mathbf{m}^{(l)}_2, ..., \mathbf{m}^{(l)}_K ]$ where $ \mathbf{m}^{(l)}_k \in \mathbb{R}^{h^{(l)} \times h^{(l+1)}_k}$.
With convolutional layers, we consider a kernel as a connection in a linear layer (please refer to Figure 1). The mask $\mathbf{M} \in \mathbb{R}^{c_{in} \times c_{out} \times h \times w}$ with binary entries transforms to $\mathbf{M}' \in \mathbb{R}^{c_{in} \times c_{out}}$ with entries $m_{i,j} \in \mathbb{N}$ taking values from $0$ to $hw$. Then, we treat $\mathbf{M}'$ as an integer variable and leverage an available optimization library (CVXPY) to solve the resulting mixed integer programming problem.
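The two transformations just described can be sketched in a few lines of NumPy (a hedged illustration with made-up sizes, not the authors' implementation): collapsing a binary conv mask to an integer matrix, and splitting the output nodes into $K$ equal chunks that can be solved as independent subproblems.

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, h, w = 4, 8, 3, 3

# binary conv mask M of shape (c_in, c_out, h, w)
M = (rng.random((c_in, c_out, h, w)) < 0.3).astype(int)

# collapse kernel dims: M' has integer entries in {0, ..., h*w},
# counting the unpruned weights inside each kernel
M_prime = M.sum(axis=(2, 3))
assert M_prime.shape == (c_in, c_out) and M_prime.max() <= h * w

# split the c_out output nodes into K equal chunks; each chunk is an
# independent (and hence parallelizable) optimization subproblem
K = 4
chunks = np.split(M_prime, K, axis=1)
print([c.shape for c in chunks])
```

Each of the $K$ chunks here has shape $(c_{in}, c_{out}/K)$, matching the $\mathbf{m}^{(l)}_k \in \mathbb{R}^{h^{(l)} \times h^{(l+1)}_k}$ subproblems described above.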
> Q5. Although the authors claim that NPB generally outperforms PHEW, the standard deviation with respect to random seeds should be included in the paper for fair comparison. For instance, the performance gain depicted in Fig. 7 seems marginal. Additionally, in lines 694-695, it is stated that NPB consistently outperforms the baselines regardless of the hyperparameter configuration. However, it should be noted that certain choices may lead to worse performance compared to the baselines (refer to Table 1).
>
A5: We appreciate your feedback. We will incorporate standard deviation in the next version. Regarding the statement in lines 694-695, we appreciate your observation. We intend to clarify our claim to accurately reflect that NPB generally outperforms the baselines, and while certain hyperparameter configurations might lead to slightly reduced performance compared to PHEW. Overall, we believe that our findings still highlight the competitiveness and effectiveness of NPB.
> Q6. In equation 3, what is the purpose of “min” operation? Or is it just a typo of $min(m_{i,j}^l - 1, 0)$
>
A6: The "min" operation in Eq. 3 serves to transform the formulation into a version compatible with the convex optimization library. We acknowledge the error; it indeed should be $\min(m_{i,j}^l - 1, 0)$. We will rectify it in the next version.
> Q7. What kind of optimizer is implemented in the code as to approximating the original integer programming formulation?
>
A7: The optimization problem is solved efficiently via the available convex optimization library (CVXPY), in which we use the default mixed integer programming solver, SCIPY [1].
[1] https://www.cvxpy.org/tutorial/advanced/index.html#setting-solver-options
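As a toy illustration of the kind of mixed integer program involved (our own hedged example with a made-up linear objective, not the paper's actual formulation, and using SciPy's MILP interface directly rather than going through CVXPY): select exactly $k$ binary mask entries to maximize a linear score.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

scores = np.array([0.9, 0.1, 0.5, 0.7, 0.3, 0.8])  # made-up edge scores
k = 3  # budget: number of mask entries to keep

res = milp(
    c=-scores,  # milp minimizes, so negate to maximize the total score
    constraints=LinearConstraint(np.ones((1, scores.size)), k, k),
    integrality=np.ones(scores.size),  # all variables integer...
    bounds=Bounds(0, 1),               # ...and binary
)
mask = res.x.round().astype(int)
print(mask)  # keeps the k highest-scoring entries
```

The real problem has a nonlinear node-path objective and per-layer structure, but the variable/constraint pattern (binary mask entries, a sparsity budget) is of this form.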
> Q8. It is equally important for the authors to acknowledge and discuss the limitations of their proposed method.
>
A8: We appreciate your feedback. We would like to acknowledge the following limitations of our approach:
- Global Optimization of Node-Path Balancing: Our method has not achieved global optimization of node-path balancing. The intricate relationship between effective nodes and paths presents a complex optimization challenge that remains a subject of ongoing exploration.
- Discrete Optimization: The current form of our method involves discrete optimization. This can limit applications such as using NPB as a criterion for pruning during training, or in Neural Architecture Search problems.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their detailed response and the supplementary experiments to confirm the effectiveness of NPB. That said, I am now confused by the authors' following response regarding Table 1 in the pdf.
> A3: We have run shuffling experiments for NPB with setting ResNet20 on CIFAR-10 and reported in Table 1 in attached file. After shuffling subnetworks found by NPB, the subnetworks' width broaden while the figure for paths drops significantly as the level of sparsity increases. However, similar to PHEW, the performance of unmodified subnetworks is better than the shuffled counterparts.
I believe the layer-wise shuffling experiment is a critical motivating experiment behind NPB, along with the NAS experiment. To my understanding, the shuffled versions of SNIP and SynFlow in Figure 3 (main paper): (i) occasionally outperform the original ones due to the increased number of effective nodes under 99% sparsity - *importance of effective nodes*; (ii) hurt performance due to an insufficient number of effective paths at extreme sparsities (i.e., > 99%) - *importance of effective paths*. However, Table 1 (in the pdf) shows that shuffling NPB leads to an increase in the number of effective nodes and a decrease in the number of effective paths, while I expected a reduction in both measurements. Further, the authors point out that the performance of the unshuffled network is better than that of the shuffled counterparts, but I do not find this explanation satisfactory because layer-shuffling consistently hurts the performance of PHEW and SNIP/SynFlow at extreme sparsities as well. From my perspective, Table 1 appears to emphasize the significance of effective paths alone rather than achieving a *balance* between nodes and paths. Additionally, is there any particular reason for presenting the NPB results in terms of the number of effective nodes and paths rather than the previously employed ratio?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer HuF1
Comment: Many thanks for your further comments. We are sorry that our response caused confusion. We hope the following explanation clarifies matters:
Regarding results presented in Table 1 (extra pdf):
1. A possible explanation for why shuffling after NPB produces the results in Table 1 (extra pdf) is that the optimization method used in NPB eventually reaches a *local optimum for node-path balancing*. That is, applying shuffling after NPB moves us away from that local optimum. In particular, while shuffling may improve representation capacity (by increasing the number of effective nodes), it substantially decreases the number of paths compared to NPB, as seen in that table, which limits information flow. As a result, the performance of the shuffled subnetwork drops (as seen at sparsity levels ~96% and 99%). Therefore, Table 1 (extra pdf) is simply evidence that shuffling moves us away from that local optimum, which eventually also results in a performance decrease.
2. We would like to highlight that the main claim of our paper is that NPB is well calibrated to achieve the effective node-path balance. As discussed in Point 1, Table 1 (extra pdf) indicates that layer-wise shuffling may have a negative impact on this balance, leading to deteriorated performance. Please note that we argue it is the node-path balance that matters, rather than the absolute numbers of effective nodes and paths. Accordingly, we observed that layer-wise shuffling may either reduce the number of effective paths while keeping the same number of effective nodes (non-extreme sparsity cases), or occasionally increase the number of effective nodes but decrease the number of effective paths (since shuffling uniformly redistributes edges to all nodes).
3. Now, you are correct that based only on the results in Table 1 (extra pdf) one might conclude that just increasing the number of effective paths improves performance. However, this conclusion is not correct in general, as the main paper already shows that increasing the number of effective paths alone is not enough (please refer to Figure 5 in our paper): for instance, subnetworks generated by SynFlow have a much higher number of effective paths and fewer effective nodes, yet lower accuracy, compared to NPB. Therefore, the results in Table 1 should not be interpreted alone; rather, they complement our main findings and further support the argument that NPB is better calibrated to achieving the desired node-path balance.
4. Overall, we should not read Table 1's results in isolation and draw final conclusions from them. Those results are auxiliary and further support our claim that NPB is better calibrated to achieving the desired balance.
Regarding using numbers of effective nodes and paths instead of ratio:
Many thanks for this comment. Here we used the raw numbers rather than the ratio simply to better visualise the differences between competing methods. We can provide both versions (number and ratio) in the appendix for the sake of consistency. | Summary: This paper posits that the performance of (neural network) Pruning at Initialization methods depends on a balance between effective nodes and paths. With this framework, the authors explain why randomly shuffled subnetworks are sometimes more effective than subnetworks found by pruning-at-initialization methods. Finally, using the node-path balancing principle, the authors propose a pruning-at-initialization scheme that solves a linear program optimizing the pruning mask to balance effective paths and nodes, which outperforms other pruning-at-initialization methods.
Empirical experiments are done on image datasets (CIFAR and TinyImageNet) with VGG and ResNet-20 models. Further empirical evidence for the node-path balancing principle is demonstrated through NAS benchmarks.
Strengths: - The paper is well-written, and generally content is well-organized.
- The proposed node-path balancing principle is used to explain an existing phenomenon (random shuffling) as well as present a new pruning at initialization scheme, which is quite interesting.
- The convex-program-based pruning method (NPB) outperforms other pruning at initialization methods and requires fewer FLOPs. PHEW is close in performance to NPB; however, that the node-path balancing framework can also explain this is interesting.
Overall, the paper does a good job of introducing the node-path balancing principle and provides several empirical evidence in its support.
Weaknesses:
- The NAS experiments are interesting, but it wasn't clear to me how to connect them with pruning at initialization or data-agnostic pruning, as NAS experiments are data-dependent.
- The term balance may be vaguely used.
- The proposed principle states that at a particular sparsity, the best-performing subnetwork has to strike a balance between the effective nodes and paths. However, it does not predict the right balance.
- It would be interesting to see if this balance depends on sparsity, model, and dataset, or on model and dataset only. Could the best-performing subnetwork be maximizing average effective paths per node?
- The experiments are limited to image datasets and two neural network architectures. However, I appreciate that the authors report results over three random seeds.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Please see the weaknesses section.
- Shouldn't Eq 4/5 consider the mask M as binary? As such, M is a real value; how is the solution converted to a pruning mask?
- Line 262: Should M vary from 0 to hw instead of kw for convolution masks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: None, that are not discussed or addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for recognizing that our proposed method is technically sound and yields commendable empirical results. We would like to address all the weaknesses you pointed out, point by point, below:
> Q1. The NAS experiments are interesting, but it wasn't clear to me how to connect them with pruning at initialization or data-agnostic pruning, as NAS experiments are data-dependent.
>
A1: The intuition behind the NAS experiments is that we would like to show a strong correlation between the two metrics (nodes, paths) and the final performance of network candidates. In particular, networks designed to have higher node and path counts tend to achieve superior performance after training. We link this problem with sparse network configuration design.
> Q2. The proposed principle states that at a particular sparsity, the best-performing subnetwork has to strike a balance between the effective nodes and paths. However, it does not predict the right balance.
>
A2: Pinpointing the precise balancing point is indeed a complex challenge. In our research, we begin by identifying these balancing regions. With the aid of a straightforward proxy method, we endeavor to address this issue, achieving notable empirical balancing and superior performance outcomes. The intricacies of achieving the ideal balance necessitate ongoing exploration, and we believe our work serves as a stepping stone towards this pursuit.
> Q3. It would be interesting to see if this balance depends on sparsity, model, and dataset, or on model and dataset only. Could the best-performing subnetwork be maximizing average effective paths per node?
>
A3: Indeed, the interplay of balance can be influenced by factors such as sparsity, model architecture, and dataset characteristics. Although explicitly pinpointing this balance is intricate and non-trivial, a core contribution of our work is the introduction of an approximation method (NPB) that empirically navigates subnetworks within these balancing regions across diverse sparsities and model types.
Maximizing average effective paths per node would be an interesting direction. Indeed, a simplistic approach could lead to uniformly distributed connections, resembling the outcomes of Random pruning. To truly achieve a meaningful and effective balance, we believe a more sophisticated strategy is likely necessary.
> Q4. The experiments are limited to image datasets and two neural network architectures. However, I appreciate the authors reporting results over three random seeds though.
>
A4: Thank you for your suggestions. We would like to clarify that ResNet20 is a version of ResNet defined for the CIFAR-10/100 tasks. These CIFAR versions have significantly fewer parameters than the ImageNet version (ResNet18) and are commonly used in the pruning literature. Regarding more network architectures, we have conducted additional experiments on two other ResNet versions, ResNet32 and Wide-ResNet32, on CIFAR-10. These two versions are deeper and wider than ResNet20. We visualize the results in Figure 4 in the attached file.
> Q5. Shouldn't Eq 4/5 consider the mask M as binary? As such, M is a real value; how is the solution converted to a pruning mask?
>
A5: We treat M as an integer variable. The optimization problem is solved efficiently via an available convex optimization library (CVXPY), in which we use the default mixed-integer programming solver, SCIPY [1].
[1] https://www.cvxpy.org/tutorial/advanced/index.html#setting-solver-options
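To make the setup concrete, here is a minimal sketch of layer-wise mask selection posed as an integer linear program, solved with `scipy.optimize.milp` (the backend that CVXPY's SCIPY solver wraps for mixed-integer problems). The per-edge scores `c` and budget `k` are hypothetical placeholders, not the actual NPB node-path objective:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Hypothetical per-edge scores for one layer; keep k of the 6 edges.
c = np.array([0.9, 0.1, 0.5, 0.7, 0.2, 0.8])
k = 3

res = milp(
    c=-c,                                  # milp minimizes, so negate to maximize
    integrality=np.ones_like(c),           # all mask variables are integer (0/1)
    bounds=Bounds(0, 1),
    constraints=LinearConstraint(np.ones((1, len(c))), k, k),  # sum(M) == k
)
mask = res.x.round().astype(int)
print(mask)  # keeps the three highest-scoring edges: [1 0 0 1 0 1]
```

With a linear objective and a single budget constraint this reduces to picking the top-k scores; the value of the formulation is that richer node/path constraints can be added in the same way.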
> Q6. Line 262: Should M vary from 0 to hw instead of kw for convolution masks?
>
A6: Thank you for catching the error; indeed, M should vary from 0 to hw. We will fix this typo in the next version.
---
Rebuttal Comment 1.1:
Comment: Thanks. Most of my comments are addressed.
A1. Thanks, I understand the relevance of NAS benchmarks better now. I would like to highlight that NAS is a data-dependent search over dense architectures, whereas in your case the search is data-independent and sparse. This could create confusion for the reader --- using this as motivation is ok, but it may be worth addressing this intricate difference in the text.
A3. While optimizing a ratio may be difficult, it would be nice to visualize the average paths per node as a post-hoc analysis, both to support the argument for optimizing both numbers jointly and to serve as another measure contributing to a good sparsification mask, even though it is harder to optimize. Given the title of the paper, this seems fitting.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer qyia
Comment: We sincerely appreciate your thoughtful and valuable feedback. We will certainly integrate your suggestions into the next version of our paper.
Strengths: 1. This paper studies the PaI methods from a novel perspective of model topology. This is a very interesting but also challenging perspective.
2. The figures and graphs in this paper are very delicately plotted and are of high quality. They help the readers better understand the methods proposed.
3. The literature review is thorough, covering the most important PaI literature.
4. This work is written in a very coherent manner, clearly presenting the motivation, the method, the rationale behind the method, and the logic of the empirical studies.
5. The empirical results of the proposed method are indeed very impressive.
Weaknesses: I list several weaknesses below from different perspectives. I will consider raising my score if they are properly addressed.
* [Method] Based on my understanding of the NAS observation, the reason for balancing the effective nodes and the effective paths is to maximize the usage of the limited parameter quota. However, if this is true, wouldn't directly cutting down the width of the network be a perfect option for NPB? Correct me if I missed anything.
* [Method] Mathematically, the formulation of the optimization objective in Eq. (0) (between Line 243 and Line 244) and Eq. (4~5) are not standard. The to-be-optimized variables should be clearly stated beneath the maximization symbol.
* [Method] It is not very clear to me, how the optimization is carried out through the objective Eq. (4~5). Are there back-propagations involved? It seems not.
* [Experiments] For Fig. 5, the information conveyed is not as clear as I expected. Probably this is because the performance of each method in each setting is annotated with numbers. I would suggest the authors plot a normal figure showing the performance change (y-axis) vs. the sparsity level (x-axis) as many pruning papers do (there are too many, so I spare the references here). It can help readers compare the final accuracy of different methods if they do not care too much about the intermediate results (e.g., the effective paths/nodes).
* [Experiments] In the experiments, the $\alpha$ is set to $0.01$. However, the word "balanced" claimed in the abstract seems a bit deceptive. To me, the value of $\alpha$ is very crucial in the proposed method, but the authors fail to demonstrate its role through some ablation studies. I would suggest the authors either explain their choice of $\alpha$ and/or conduct some ablation studies on $\alpha$.
* [Experiments] In Fig. 5, will it be better to add a reference line such as $y = \alpha x$ to help judge the balance of the nodes/paths? Correct me if I missed anything.
* [Experiments] This is not a must but I think the baselines compared in this paper are a bit old. Some methods like ProsPr (already cited in the paper) are recommended to be compared. The lack of this result will not change my evaluation on this paper.
* [Minor] Rephrase: "An effective node/channel is one that at least one effective path goes through it" ==> "A node/channel is effective if at least one effective path goes through it".
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. It would help the readers to understand the details of the motivation studies if the authors could explain how the "effective sparse ratio" is calculated. Is it done by traversing? Or is there any effective algorithm to calculate that automatically?
2. I am very surprised by the results of the motivation studies. Thus, I am curious if the same rules can be applied to post-training pruning?
3. In the algorithm of NPB, the layer-wise sparsity is obtained through ERK. Is there a particular reason for this choice? Will other methods also be applicable?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I do not have additional comments on the limitation of this work. Please refer to the "Weaknesses" and "Questions" sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for taking the time to review our work, and we are glad that you find it well-written with sufficient experiments. We would like to address all the weaknesses you pointed out, point by point, below:
> Q1. Wouldn't directly cutting down the width of the network be a perfect option for NPB?
>
A1: We appreciate your engagement with our work. While it's true that balancing effective nodes and paths aims to maximize the utilization of a limited parameter budget, directly cutting down the network's width might not be the ideal solution for pruning tasks.
Drastically reducing the network's width can lead to a significant drop in representation capacity, potentially causing a performance decline. For example, Synflow, which reduces width to some extent, often sees performance improvements after reshuffling subnetworks. This effect suggests that simply narrowing the network might not fully capture the intricate trade-offs between nodes and paths.
The balancing of effective nodes and paths is about finding the right equilibrium to maintain an adequate representation capacity while also ensuring information flow. Our proposed approach acknowledges these complexities and seeks to strike a more refined balance to achieve competitive performance within parameter constraints.
> Q2. The formulation of the optimization objective in Eq. 0,4,5 are not standard.
>
A2: We appreciate you pointing out our mistake. We will review and revise the formulations in the updated version.
> Q3. It is not very clear to me, how the optimization is carried out through the objective Eq. (4~5). Are there back-propagations involved? It seems not.
>
A3: Solving the node-path balancing objective globally over the whole neural network appears to be a hard problem; one can conjecture that it is even NP-hard. However, one of our main contributions is to sidestep this hard problem by solving a sequence of easy problems to obtain good approximate solutions. In particular, we propose an approximation that proceeds layer by layer through convex optimization. The approximate problem is solved efficiently via an available convex optimization library (CVXPY), in which we use the default mixed-integer programming solver, SCIPY [1].
[1] https://www.cvxpy.org/tutorial/advanced/index.html#setting-solver-options
> Q4. I would suggest the authors to plot a normal figure showing the performance change (y-axis) vs. the sparsity level (x-axis) as many pruning paper does.
>
A4: We have already provided a figure of this kind in Appendix F. Please refer to Appendix F in the supplementary material for more details.
> Q5. The choice of $\alpha$ and/or conduct some ablation studies on $\alpha$.
>
A5: It is important to mention that optimizing nodes is relatively simple compared to optimizing paths (we can simply assign parallel edges connecting nodes in the current layer to nodes in the next layer); hence, we deliberately choose small values of $\alpha$, which can be considered prior knowledge. We set the same small $\alpha$ ($\alpha = 0.01$) in all settings to ensure a fair comparison with other baselines. In Appendix G, we present an ablation study on $\alpha$ and $\beta$.
> Q6. In Fig. 5, will it be better to add a reference line such as $y=ax$ to help judge the balance of the nodes/paths?
>
A6: It is challenging to identify a specific balancing line. Balancing nodes and paths is a complex and non-trivial endeavor due to their mutual impact on each other. We propose a practical proxy method that addresses this complex balancing issue and offers strong empirical balancing results across a range of experiments. Our goal with the proxy method is to offer a practical solution that strikes a balance while being easily applicable.
> Q7. Comparison with recent baselines such as ProsPr
>
A7: We appreciate your recommendation and understand your perspective on including more recent baselines. While ProsPr belongs to the PaI approach, it is worth noting that ProsPr employs multiple gradient descent steps to identify important weights. Thus, we believe that ProsPr has certain advantages over PaI methods like our NPB (which does not use information from the dataset).
> Q8. [Minor] Rephrase: "An effective node/channel is one that at least one effective path goes through it" ==> "A node/channel is effective if at least one effective path goes through it"
>
A8: Thank you for your suggestions, we will edit it in the next version.
> Q9. How the "effective sparse ratio" is calculated.
>
A9: We follow the algorithm in Appendix M of [2]. The effective sparsity ratio = #Effective parameters / #Unpruned parameters.
We determine the effective parameters as follows:
1. Set each weight in the network to 1 if it is unpruned or 0 if it is pruned.
2. Forward-propagate a single example comprising all 1’s.
3. Compute the sum of the logits.
4. Compute the gradients with respect to this sum.
5. Prune any unpruned weight with a gradient of 0. Since these weights did not receive any gradient, they are disconnected from the output of the network.
[2] https://arxiv.org/abs/2009.08576
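The five steps above can be mirrored in plain NumPy for a toy masked MLP; the architecture and masks below are hypothetical, chosen only to illustrate the mechanics. Forward-propagating an all-ones input through the mask and back-propagating the summed output marks exactly the kept weights that lie on an input-to-output path:

```python
import numpy as np

def effective_ratio(masks):
    """Fraction of kept weights that are effective, for per-layer binary
    masks M[l] of shape (in, out) with weights replaced by the mask."""
    a = [np.ones(masks[0].shape[0])]     # step 2: forward pass of all 1's
    for M in masks:
        a.append(a[-1] @ M)
    g = [np.ones(masks[-1].shape[1])]    # steps 3-4: gradient of summed logits
    for M in reversed(masks):
        g.insert(0, M @ g[0])
    # step 5: a kept weight (i, j) in layer l is effective iff its input unit
    # receives signal (a[l][i] > 0) and its output unit reaches the logits
    # (g[l+1][j] > 0); zero-gradient weights are disconnected.
    eff = sum(int(((a[l][:, None] > 0) & (g[l + 1][None, :] > 0) & (M > 0)).sum())
              for l, M in enumerate(masks))
    kept = sum(int(M.sum()) for M in masks)
    return eff / kept

# 2-3-2 MLP: the edge into hidden unit 2 is kept, but that unit has no
# outgoing edge, so the edge is ineffective (dangling).
m1 = np.array([[1, 0, 1], [0, 1, 0]])
m2 = np.array([[1, 0], [0, 1], [0, 0]])
print(effective_ratio([m1, m2]))  # 4 of 5 kept weights effective -> 0.8
```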
> Q10. I am curious if the same rules can be applied to post-training pruning?
>
A10: We consider LTH a representative method for the post-training pruning approach. Through the additional experiments shown in Figure 1 in the attached file, we observe that LTH's subnetworks also follow the same rule. The gap in performance between LTH and PaI methods is due to LTH leveraging information from the dataset and model training to produce subnetworks. Overall, we strongly believe these results align with our principle and further support it as a necessary condition for good PaI.
> Q11. Why choose ERK to obtain layer-wise sparsity? Will other methods also be applicable?
>
A11: We present this ablation study in Appendix G. Due to the 6000-word limit, we refer the reviewer to Appendix G in the supplementary material for more details.
---
Rebuttal Comment 1.1:
Title: Thank you for your response.
Comment: Dear authors,
Thank you for your response. I have carefully read your response and my concerns have mostly been alleviated. Therefore, I decide to keep my original rating of 6 and will vote for acceptance if AC asks. Thank you again for your hard work.
Best,
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 8roE
Comment: We would like to thank you again for your valuable feedbacks and insights. We will certainly integrate your suggestions in the next version of our paper. | Rebuttal 1:
Rebuttal: Thank you for your valuable and constructive feedback. We have performed the additional experiments requested by the reviewers and provide the results in the attached pdf file.
We hope that our responses address your concern. If you have any additional questions, uncertainties, or areas you would like us to elaborate on, we are happy to engage in a continued discussion and provide any additional information you may require.
Pdf: /pdf/66db1f7121a8a910cd90dad38b9e7055c78328e8.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The authors propose a Pruning at Initialization (PaI) method that considers the balance between the number of effective nodes and effective paths. This design principle is based on the observations on the NAS benchmark as well as layer-wise reshuffling. The pruning problem is nicely formulated as a multi-objective optimization problem. The experiment results show that the proposed method NPB outperforms the state-of-the-art method PHEW in some configurations.
Strengths: * A new Pruning at Initialization (PaI) method is developed considering both the number of effective nodes and the number of effective paths.
* A lot of analyses and experiments have been conducted to show the superiority and motivation of the proposed method.
* The pruning problem is formally transformed into a multi-objective optimization problem.
* In some settings, the proposed method NPB is shown to outperform the state-of-the-art method PHEW.
Weaknesses: The motivation and technical details are not very clear in several places, as follows.
**Overall:**
* The sweet spot of the proposed framework seems to be the extreme sparsity regime (> 99%). I am really wondering if we need to prune a network to the extreme. My impression is that the sweet spot is a corner case.
**Section 3.3:**
* Line 187: Why does reshuffling make subnetworks wider?
* Line 205: How is the hypothesis derived from reshuffling? PaI actually does *not* involve reshuffling.
**Section 4.1:**
* Figure 4 (and Figure 1): It seems that, in typical architectures, the number of effective nodes and the number of effective paths are strongly correlated. Then, why do we need to balance these two metrics?
* Line 223: Along the same lines, the strong correlation between the two metrics indicates that one of them is redundant. Thus, the overall claim cannot be supported by the observation.
* Figure 4 (and Figure 1): What's the meaning of each value in the x- or y-axis? For example, what do you mean by $9.0$ in the x-axis?
**Section 4.2:**
* Due to the strong correlation between the number of effective nodes and the number of effective paths, it may be unnatural to produce subnetworks with too many effective paths (or nodes) and too few effective nodes (or paths).
**Section 4.3:**
* Even though it is reasonable to balance these two metrics, their ranges significantly differ. Actually, they have completely different scales (natures). This *incompatibility* between the two metrics would explain a very small value of $\alpha$=0.01. Syntactically, Equation (4) does not achieve the balance, but mainly considers only $f_p$. Also, $\alpha=0.01$ is indeed arbitrary and does not provide any insight on the optimal balance.
* Appendix G (Ablation Study): Figure 8 shows that the accuracy is not that sensitive to $\alpha$. I am still very confused why the balance between the two metrics is indeed important. Figure 8 directly shows that the optimal balance may not be important.
**Section 5.2:**
* It is not clear how the sparsity levels were chosen for the datasets in Figure 5. That is, how are the 12 settings chosen? I believe that a wider range of sparsity levels should be analyzed.
* In contrast to Figure 4 (and Figure 1), it is not clear why most of the circles are not placed on the diagonal.
* Line 316: Many cases of NPB do not lie in the so-called balancing regions. It would be better to mark the balancing regions in the figures.
Overall, I believe the proposed framework is interesting and has some potential. However, the main contribution is neither well motivated nor rigorously presented. It seems that my concerns may not be sufficiently addressed by the rebuttal process. Therefore, I would like to give my rating as a reject.
--
**After Rebuttal**
Some of the concerns and questions are resolved by the authors' rebuttal. However, the observational motivation and the need for careful optimization are not convincingly presented. Thus, I am a little reluctant to give a high score and would like to increase my rating to 5.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the weak points mentioned above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors did not discuss the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work. We would like to address all the weaknesses you pointed out, point by point, below:
> The sweet spot of the proposed framework seems to be the extreme sparsity regime (> 99%)
>
A1: We respectfully disagree with the reviewer. Our experiments have been conducted across a range of sparsity levels, all of which are lower than 99%. The results demonstrate superior performance compared to baseline methods.
> Why does reshuffling make subnetworks wider?
>
A2: Iterative pruning prioritizes high-scoring nodes, which are typically determined by gradient/weight information. When nodes have edges pruned, their scores tend to decrease, making them more susceptible to further pruning. This can lead to slender subnetworks (e.g., SynFlow, Iter-SNIP). When shuffling layer-wise, connections are redistributed uniformly, widening the subnetwork. Similar findings are illustrated in [1].
[1] https://arxiv.org/abs/2009.08576
> How is the hypothesis derived from reshuffling? PaI actually does *not* involve reshuffling.
>
A3: In general, PaI approaches do not involve reshuffling. However, we use reshuffling to emphasize the balance between effective nodes and paths. Good sparse NNs should have enough effective nodes to ensure good representation capacity and enough effective paths to guarantee good signal propagation during training. With a limited number of parameters, shuffling can intuitively widen subnetworks, but it can decrease the number of effective paths, which can impede information flow (good signal paths can be destroyed after shuffling). The empirical results align with our intuition: while shuffling can sometimes boost performance, accuracy drops as sparsity increases. Stemming from this observation, we propose to balance the two metrics, enhancing optimal sparse NN design.
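As a small illustration of the layer-wise reshuffling discussed here (a sketch with a hypothetical mask, not the exact experimental code), shuffling redistributes kept edges uniformly at random within each layer while preserving per-layer sparsity:

```python
import numpy as np

def shuffle_layerwise(masks, seed=0):
    """Permute kept edges uniformly within each layer's binary mask."""
    rng = np.random.default_rng(seed)
    out = []
    for M in masks:
        flat = M.flatten()
        rng.shuffle(flat)            # same number of edges, new positions
        out.append(flat.reshape(M.shape))
    return out

m = np.array([[1, 0, 1, 0], [0, 1, 0, 0]])
s = shuffle_layerwise([m])[0]
assert s.sum() == m.sum()            # per-layer sparsity is unchanged
```

Because edge positions are re-drawn uniformly, the shuffled mask tends to touch more nodes (wider subnetwork) while breaking coordinated chains of edges (fewer effective paths), which is exactly the trade-off the reshuffling experiments probe.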
> Figures 1, 4: in typical architectures, the number of effective nodes and paths are strongly correlated, which indicates that one of them is redundant.
>
A4: We disagree with the reviewer for two reasons:
- In NAS, networks are dense. Correlations between nodes and paths exist depending on specific design aspects like the number of channels. However, components like skip-connections or pooling layers may affect either nodes or paths. Different block configurations (Appendix H) result in variable node and path counts.
- The node-path relationship is totally different in pruning. All subnetworks are pruned from the same dense network, which imposes the strong constraint of a fixed number of connections. The relationship is now based on the remaining connections. Therefore, the way we prune the network can significantly affect this relationship.
> Then, why do we need to balance these two metrics? Thus, the overall claim cannot be supported by the observation.
>
A5: It's important to emphasize that in typical architectures, nodes and paths are related and used to monitor the network's performance.
One notable node-path balancing scenario is avoiding uniformly distributing connections across nodes to maximize representation capacity (as Random pruning does), which implicitly limits information flow.
Conversely, node-path balancing also prevents pruning out most of the channels while largely keeping the connections between the remaining nodes (as SynFlow does), which leaves networks with too few linear regions to distinguish features.
The node-path mechanism aids us in identifying the optimal subnetwork that is determined by both metrics. This approach results in superior performance compared to the use of a single metric as we have shown in our paper.
> Figures 1, 4: What's the meaning of each value in the x- or y-axis?
>
A6: The x- and y-axes are the number of effective nodes (in ln scale) and the number of effective paths (in log scale), respectively. We mentioned these numbers in lines 284-286.
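For a layered MLP, the path count plotted on the y-axis can be computed exactly by chaining the binary mask matrices; a toy 2-3-2 example with hypothetical masks:

```python
import numpy as np

# Chaining per-layer mask matrices counts input-to-output paths through
# kept edges: entry (i, j) of m1 @ m2 is the number of paths from input i
# to output j, and summing over all inputs/outputs gives the total.
m1 = np.array([[1, 0, 1], [0, 1, 0]])   # layer 1 mask (2 -> 3)
m2 = np.array([[1, 0], [0, 1], [0, 0]]) # layer 2 mask (3 -> 2)
paths = np.ones(2) @ m1 @ m2 @ np.ones(2)
print(int(paths))  # 2 paths survive pruning
```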
> Due to the strong correlation between nodes and paths, it may be unnatural to produce subnetworks with too many paths (or nodes) and too few nodes (or paths).
>
A7: In pruning, as explained in A4, it is natural that we can make the network slender with many paths (like SynFlow) or broader with fewer paths (like PHEW), as shown in Figure 5.
> Different scales of metrics in Eq-4. $\alpha=0.01$ is indeed arbitrary and does not provide any insight on the optimal balance.
>
A8: Due to limited space, we refer the reviewer to lines 269-274 for details.
> Appendix G, Figure 8 shows that the accuracy is not that sensitive to $\alpha$ and optimal balance may not be important.
>
A9: In Figure 8, at low sparsity levels (<90%), optimizing nodes is easy because there are many connections, and differences in $\alpha$ only affect the number of paths. However, as sparsity increases and $\alpha$ varies, the limited number of weights directly impacts the number of nodes in subnetworks. By fixing $\beta$ and varying $\alpha$, subnetworks' performance changes accordingly. Thus, identifying an optimal balance point is crucial for pruning neural networks.
> How sparsity levels were chosen? Wider range of sparsity should be analyzed
>
A10: The choice of sparsities was guided by the compression rate $r$, computed as #Remaining_weight / #All_weight = $10^{-r}$ [2]. Sparsity levels are derived from $r \in \{0.5, 1, 1.5, 2\}$. While a wider sparsity range would be informative, we believe the chosen sparsity settings are common and effectively demonstrate the efficiency of our proposed method.
[2] https://arxiv.org/abs/2006.05467
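As a quick check, the sparsity levels implied by these compression rates can be computed directly. This is a small illustrative sketch (the function name is ours, not from the paper), following #Remaining_weight / #All_weight = 10^(-r):

```python
def sparsity_from_rate(r: float) -> float:
    """Fraction of weights removed when #Remaining/#All = 10^(-r)."""
    return 1.0 - 10 ** (-r)

# The four compression rates map to these sparsity percentages:
levels = {r: round(100 * sparsity_from_rate(r), 2) for r in (0.5, 1.0, 1.5, 2.0)}
# -> {0.5: 68.38, 1.0: 90.0, 1.5: 96.84, 2.0: 99.0}
```

This matches the "Sparsity = 96.84" level discussed later in the thread (corresponding to $r = 1.5$).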
> In contrast to Figures 1, 4, it is not clear why most of the circles are not placed on the diagonal.
>
A11: Please refer to A4 and A7.
> Many cases of NPB do not lie in the so-called balancing regions. It would be better to mark the balancing regions in the figures.
>
A12: We recognize the need to clarify the balancing regions of effective nodes and paths. Marking these regions is complex because the two factors affect each other. Our proposed proxy method addresses this challenge, yielding strong empirical balancing outcomes across various experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the authors' detailed responses. Some of my concerns and questions are resolved. However, the observational motivation and the need for careful optimization are still not clear. Overall, I would like to increase my rating to 5. Below is my response to each of the answers.
* A1: Line 54 and 191, the authors explained the motivation using the >99% regime.
* A2: Thanks for your explanation.
* A3: Makes sense.
* A4: If so, why is Section 4.1 (NAS Observations) relevant to your claim? This is still not clear.
* A5: Understood, but my point was connection from Section 4.1 to Section 4.2, which is still not clear.
* A6: Better to specify the base in the figures.
* A7: Okay.
* A8: I read it during the review, but it was too vague.
* A9: Even when Sparsity = 96.84, optimizing $\alpha$ does not seem to be important. When $\alpha$ increased 90 times (from 0.01 to 0.9) at $\beta = 1.0$, the accuracy changed by only 3% (and even increased, as opposed to A8). Thus, it is still not clear why optimizing $\alpha$ is crucial.
* A10: Understood.
* A11: Understood.
* A12: Hope to improve the presentation.
Title: Response to Authors
---
Reply to Comment 1.1.1:
Title: Response to Reviewer a28p
Comment: We really thank you for your careful read and detailed point-by-point response to our rebuttal, we truly appreciate it! We would like to add some further clarifications as below:
- In the shuffling experiment, we show a counter-argument against prior findings [1], which claim that PaI methods are insensitive to random shuffling. We show that as the sparsity level increases (especially >99%), reshuffling significantly reduces the performance of subnetworks. We also provide a potential (and empirically observed) explanation for this phenomenon: the significant decrease in the number of effective paths after reshuffling. This observation leads to our idea of node-path balancing.
- The motivation behind the NAS observation is that, in the NAS experiment, we observe a general trend: networks with better performance tend to have a high number of effective paths and nodes together. This, together with the reshuffling observation, leads us to believe that the node-path balancing hypothesis/principle is essential, which motivates the rest of our work. In particular, we conjecture that a similar trend should hold for sparse NNs. To evaluate this, we applied weight reshuffling to SNIP and SynFlow, as reshuffling weights after pruning will intuitively destroy the good information flow (akin to effective paths) that these methods preserve; SNIP and SynFlow tend to prune the weights with the smallest impact on synaptic flow. Our results are in line with our hypothesis: when sparsity increases, layer-wise reshuffling significantly impedes information flow with a lower number of effective paths (though with more effective nodes), damaging the performance of these methods. Therefore, our preliminary experiments demonstrate that maintaining a good balance of effective nodes and paths is crucial for sparse NNs.
- Regarding choosing $\alpha$: our objective in Eq. 4~5 is to simultaneously maximize both effective nodes and paths. However, the relationship between the number of effective nodes and paths is complicated and depends on various aspects such as architecture, sparsity level, etc. In some cases, increasing nodes (paths) leads to a reduction in paths (nodes). Therefore, $\alpha$ is an adjustment term between the two metrics. Choosing $\alpha$ or $\beta$ exactly is non-trivial, and the subnetwork's performance is sensitive to this choice. As the reviewer pointed out, "when $\alpha$ increased 90 times (from 0.01 to 0.9) at $\beta = 1$ the accuracy changed by only 3%". But we argue that this change is not negligible, especially in the pruning context. As shown in Figure 8, if we choose a good pair of $\alpha$ and $\beta$, we obtain subnetworks with superior performance.
- We will make the above motivation much clearer in the camera-ready version.
[1] https://arxiv.org/abs/2009.08576 | null | null | null | null | null | null |
Adversarial Resilience in Sequential Prediction via Abstention | Accept (poster) | Summary: The authors study online learning under clean-label attacks. Since it is online learning, such attacks can be seen both as poisoning and as evasion (adversarial examples). In this direction the authors propose the use of abstention when the classifier is not confident in a prediction, and along these lines the main contribution is the introduction of a new loss function that penalizes abstention on non-adversarial examples but not on adversarial examples. The authors give an error (regret) bound of $O(d^2\log T)$ for classes with VC dimension $d$ and time horizon $T$, assuming access to the marginal distribution over the iid examples. This bound is off by a factor of $d$ from the situation where it is known that data are coming iid from some fixed distribution, but at the same time the authors allow clean-label attacks, which essentially alter the distribution. Finally, the authors also give an $O(\sqrt{T})$ bound for concept classes of VC dimension 1, but without any access to the marginal distribution over the iid examples.
After the rebuttal:
We had an interesting discussion during the rebuttal period and I am happy to see the authors will be integrating the feedback from the reviewers and improve the final version of the paper. Therefore I am increasing the score on soundness and presentation from 2 to 3 and the overall rating of the paper from 3 (reject) to 6 (weak accept). Thank you for a very interesting paper!
Strengths: The paper studies sequential prediction while the training examples can be poisoned, in the sense that they are not drawn iid from some underlying distribution but otherwise respect the ground truth. This model of learning sits between traditional machine learning, where the learner has access to iid examples, and online learning, where learners are studied under worst-case sequences of examples. The authors study sequential prediction, a term hardly found anywhere online and never defined in the text, but which apparently refers to machine learning algorithms that make predictions while they are still learning from data (e.g., the perceptron); the catch is to what extent the sequence of data can be adversarial, and in this sense the authors allow perhaps the most adversarial of sequences. I believe this is an interesting problem, though it is unclear how important it is, whether it arises in the real world, and where the motivation comes from. The paper is sometimes well-written and sometimes not. Hence, there are issues both with the significance of the results and with the clarity.
I like and appreciate the concluding section with ideas for future work and a clearly-separated paragraph on broader impact. However, the authors do not have a similarly marked paragraph listing the limitations of their work.
Weaknesses: I think the paper has good potential, but it is not quite polished yet.
Several definitions are missing or are confusing in the paper. Examples follow:
- Sequential prediction is never defined as a concept.
- Learning in the realizable case vs proper learning: in line 28 you define the hypothesis class $F$ and in lines 133-134 you indicate that labeling occurs according to some function in $F$. This is actually proper learning, which is a special case of learning in the realizable case.
- In line 135 you forget to define what a version space is (and to cite Mitchell), yet you go ahead and use this notion in lines 255 and 269 (and perhaps elsewhere).
- Littlestone dimension is mentioned but is never defined (not even in the appendix) and no example is actually given.
- In lines 186-190: What is the relationship between $n$ and $T$? Does $n$ refer to unique examples, or does it allow potential repetitions and is thus the full number of draws from the (potentially adversarial) distribution?
- One of the main contributions of the paper is the new loss function that penalizes the learning algorithm when it abstains on non-adversarial data but does not penalize the learner on adversarial data. This is captured by the equation on display in line 152, where we see that the total error the learner suffers over the sequence of the first $T$ examples is decomposed into misclassification error and abstention error. Since the characteristic function (I assume that is what 1{$\ldots$} is) is never defined, I understand from the misclassification error that the learning algorithm should receive a penalty of 1 every time it abstains since $\hat{y}_t \neq 1 - f^*(x_t)$. However, in Proposition 3.1 the learning algorithm abstains from predicting in the disagreement region and the claim is that it has misclassification error equal to 0. I have no idea how this follows from the equation that is put on display in line 152.
- In Protocol 1, two lines before the end you claim that the learner receives $x_t$ but earlier, in lines 143-144 it is clear that the learner receives $\hat{x}_t$.
Near the end the paper becomes more dense and while I appreciate that the authors are attempting to enhance the presentation by leaving the proofs in the appendix (thank you) there are still situations where there is ambiguity (e.g., in Theorem 5.1, is the "corruption model with abstentions" something new?)
In line 248 "As we saw earlier..." -> Perhaps explicitly number the particular example and refer to that with a number?
As a last remark, I am not sure if NeurIPS is the appropriate venue for this line of work. I do not believe it is.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1. What is the relevance of adversarial examples and poisoning attacks to medical applications? How is this line of work motivated for both kinds of attacks?
Q2. Going one step further, what is the motivation for studying such methods as the ones you analyze in the paper?
Q3. Can the functions in $F$ be randomized?
Q4. What is the difference between the functions 1{$\ldots$} (line 152) and I{$\ldots$} in lines 295 and 301?
Q5. Is there an updated version of the loss function that you propose?
Q6. Line 317: "... the class is restricted ...". What class?
Q7. How tight are the bounds with respect to some actual datasets that motivate this line of work (e.g., from the medical domain that is mentioned)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I would prefer to see a clearly separated paragraph somewhere near the end of the paper indicating clearly what the limitations of the proposed approach are as well as to see some experiments with real data that show how close the bounds are to reality in situations that have motivated this line of research.
Flag For Ethics Review: ['Ethics review needed: Inadequate Data and Algorithm Evaluation']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the review. We will promptly fix typographical errors in a revision. Here we address the major concerns/questions from the reviewer.
**Proper versus realizable**: Realizability and properness are somewhat orthogonal desiderata in learning theory. Realizability is a requirement on _learning instances_ and implies that there is a function in the class that exactly fits the labels from the training set (potentially presented in a sequential manner). Properness is required from the _learning algorithm_ and implies that the algorithm uses a function from the class to make its predictions. These two requirements do not enforce each other in any way. For example, optimal learning algorithms even in the realizable setting may need to be improper and there are settings, such as learning in games, where the instance is not realizable yet a proper algorithm is desired.
**Definitions and notation**: We apologize for this omission of definition of some terms such as sequential prediction and version space that we believe are standard nomenclature in learning theory. We will include formal definitions to improve readability.
**Relationship between $n$ and $T$**: In that paragraph, $n$ is used to refer to the number of samples that are not corrupted while $T$ is used to refer to the total number of samples. In general, $n \leq T$ (could take any value in $\lbrace 0, 1, \ldots, T\rbrace$) but the algorithm does not have access to $n$ and we can only use it in the analysis. We also comment that the i.i.d. samples could in fact have repetitions.
**Notion of error**: We recall that we use the characteristic (indicator) function in the standard way, i.e., as the function that outputs one when the condition in the input is satisfied and zero otherwise. Note that the misclassification error is $\sum_t 1( \hat{y}_t = 1 - f^*(x_t) )$. This means that the algorithm suffers an error of one exactly when it does **not** abstain and predicts the opposite label. For example, when $\hat{y}_t = \bot$ the algorithm does not suffer a misclassification error, as $\hat{y}_t \neq 1 - f^*(x_t)$. In particular, when the algorithm abstains in the disagreement region, it makes no prediction and suffers no misclassification error.
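To make the accounting concrete, here is a minimal sketch (our own illustration, not the paper's code) of the loss from line 152: a wrong non-abstaining prediction always costs one, while abstention costs one only on clean (i.i.d.) rounds.

```python
ABSTAIN = "⊥"  # stands in for the abstention symbol

def total_error(preds, labels, is_adversarial):
    """Loss over T rounds with binary labels in {0, 1}."""
    # Misclassification error: predicted the opposite of the true label.
    misclass = sum(1 for y_hat, y in zip(preds, labels)
                   if y_hat != ABSTAIN and y_hat == 1 - y)
    # Abstention error: abstained on a round that was NOT adversarial.
    abstain = sum(1 for y_hat, adv in zip(preds, is_adversarial)
                  if y_hat == ABSTAIN and not adv)
    return misclass + abstain
```

Note that abstaining on an adversarial round is free, which is why abstaining throughout the disagreement region incurs zero misclassification error; the subtlety raised later in Q5 is that the learner cannot evaluate the abstention term itself, since it never observes which rounds were adversarial.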
**Limitations**: Thank you for your suggestion, we will add a clearly marked paragraph highlighting the limitations of our current model and approaches.
**Appropriateness for NeurIPS**: We strongly disagree with this. As evidence, related papers working on similar setups (generally on beyond-worst-case analysis of sequential prediction) have appeared in previous editions of NeurIPS:
- Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples NeurIPS 2020 (oral)
- On optimal learning under targeted data poisoning NeurIPS 2022
- Oracle-efficient online learning for beyond worst-case adversaries NeurIPS 2022 (oral)
- Towards optimally abstaining from prediction with OOD test examples NeurIPS 2020
**Responses to Questions**:
Q1&2. As we mentioned in the introduction, our model captures both data poisoning and OOD examples. The key benefit of our model and algorithms is that they produce 'certain' predictions. In particular, the model is mostly correct whenever it predicts, despite out-of-distribution or adversarial data. Creating predictors equipped with this guarantee is very desirable in high-stakes applications where incorrect predictions are costly and we need high levels of reliability from our models. As for the medical scenario, the OOD setting is more natural. For example, if our model has seen data mostly from a particular population (say the majority population), then it is likely to make incorrect predictions on patients not from this population. In such scenarios, it is better for the predictor to abstain than to make an incorrect prediction on the minority population. Similar OOD interactions can occur in other settings such as self-driving cars, chatbots, etc.
Q3. We assume that the functions in $F$ are deterministic functions. But since the learning occurs in a sequential manner, the adversary can pick functions (in potentially a randomized manner) as long as they are consistent with the data (that is, the function matches the labels seen so far). One can also incorporate functions such that the label is random given the input but this is usually studied in the agnostic setting (not realizable). Most results in learning theory are first proven for the realizable case and the general agnostic case is solved via reduction to the realizable one. Our paper can be seen as putting forth the first step.
Q4. We use both $1$ and $I$ to both mean the indicator function. We will fix this in the revision.
Q5. We propose a loss corresponding to our setting given by the equation in line 152. But, we believe that thinking of our setting directly in terms of loss minimization is not fruitful. Since we do not have access to whether the examples the learner has seen are adversarially injected or i.i.d. ($c_t$ is unknown), the algorithm cannot compute its own loss (in particular, the learner cannot compute the abstention error). This highlights another major challenge of our setting.
Q6. Here by the class, we mean the class with respect which we are learning, referred to as $\mathcal{F}$ in the paper
Q7. The focus of our work is mainly theoretical. Our goal in this paper was to propose a model under which sets a realistic goal on handling adversarial/OOD data and showing the existence of efficient algorithms in this setting. We hope that our work will inspire practitioners to develop practical algorithms using the algorithmic ideas proposed in this work. For example, our uncertainty estimation measures based on shattered sets can be used to create heuristics that work for classes such as neural networks.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the response. I understand that the work has merit, but even after the response that the authors gave to the issues raised by the various reviewers, I think that the paper is not ready for acceptance yet. I have some minor follow-up comments to your comments.
**Proper versus realizable.** Proper learning and realizability are not orthogonal issues. Whenever one uses a proper learning algorithm, indeed they learn in the realizable setting. However, it is not necessarily true that when one has a learning algorithm that satisfies the realizability assumption, that we do proper learning. And indeed, representation-independent learning that satisfies the realizability assumption may provide more power to the learner; there are classic results along these lines as well as recent ones.
One formally needs to distinguish between the hypothesis class and the concept class, which you do not really do in your paper. Having said that you do say that labeling happens according to some function in $\mathcal{F}$ (lines 133-134) and moreover the model that you learn will be one of the functions in $\mathcal{F}$ -- that is your *hypothesis class* anyway. This *is* the definition of *proper learning* because what you are saying is that the hypothesis class coincides with the concept class, since the labeling of the ground truth function happens according to some function in the *hypothesis class*.
**Appropriateness to NeurIPS.** Among the papers that you mention, only one of them is really about online learning. Having said that, even if you can bring one (or few) papers out of thousands that get accepted every year does not mean you have a strong point. Furthermore, continual learning for neural networks leads to catastrophic forgetting.
**Answers to my questions.** Thank you for the answers to my questions. Since there is motivation from the medical domain (or even other domains) and you think that the paper is relevant to NeurIPS, I would like to see some experimental results from at least one of the domains that motivate this line of work, so that we can understand better the relationship between the theoretical results and the practical aspects of the proposed methods.
---
Reply to Comment 1.1.1:
Comment: **Proper versus realizable**: Proper and realizable are indeed orthogonal concepts. The statement “_Whenever one uses a proper learning algorithm, indeed they learn in the realizable setting_” is not necessarily true. To be in the realizable setting, the **label** in the data distribution needs to be exactly according to a function $f^* \in \mathcal{F}$ which implies there is no noise in the data (with respect to this function class). While proper learning only asks for the function used to make predictions by the learner to be in $\mathcal{F}$. Your statement implies that whenever we desire our learner to be proper, the data is not allowed any noise, which is not necessary. There are many non-realizable settings (such as learning in games) where one might want to be proper.
As you point out, we do state that we are in the realizable model, that is, our labels are always consistent with some $f^* \in \mathcal{F}$ but we do not restrict our algorithm to predict according to a function in the class.
In particular, note that our algorithm decides the labels *after* looking at the point $x_t$, and no constraints are imposed on the choice of $\hat{y}_t$. So the final model we learn will not necessarily be in the class. In fact, since we are in the online setting (additionally with abstentions) and the learner can change its prediction function in a time-dependent manner, the term *final model* needs to be interpreted carefully. Neither our algorithm nor its analysis implies that we recover the original function. In fact, our model is designed to smoothly interpolate between settings where learning is *impossible* and settings where learning is possible.
We are happy to add a clarification about our algorithms being improper in the paper.
**Appropriateness to NeurIPS**: We only gave a few examples of papers that are most closely related to our work that appeared in the recent NeurIPS, two of which appeared as orals. We could produce a much longer list if needed of theoretical work on online learning, since online learning and learning theory are fairly well represented at NeurIPS. Perhaps the strongest point we can make is that the ‘Call for papers’ for NeurIPS 2023 clearly states ‘online learning’ as a topic of interest, see bullet “General machine learning (supervised, unsupervised, **online**, active, etc.)” along with "Theory (e.g., control theory, **learning theory**, algorithmic game theory)".
**Catastrophic forgetting**: Note that our setting assumes that the true labeling function remains the same throughout time, and our guarantees ensure that we will not make too many errors on data drawn from the fixed distribution $\mathcal{D}$ throughout the sequence, therefore we do not need to worry about catastrophic forgetting. The goal of the learner is to make certain predictions on the i.i.d. data while handling adversarial/OOD data. Note that our total error is bounded by $o(T)$, hence the fraction of errors over time goes to 0 as $T \rightarrow \infty$, as is standard in online learning.
**Experimental evaluation**: We strongly disagree with the reviewer on their claim that “_Since there is motivation from the medical domain (or even other domains) and you think that the paper is relevant to NeurIPS, I would like to see some experimental results from at least one of the domains that motivate this line of work_”. Note that most theoretical work in machine learning is motivated by a real world problem, therefore it seems like the reviewer is claiming that for a paper to be appropriate for NeurIPS, experiments are mandatory. Our work poses a new paradigm for beyond worst-case learning in the online learning framework which is a very important problem in the field of online learning (see our related work). We subsequently give new algorithmic results showing that adversarial data can be handled without paying much overhead compared to the completely non-adversarial setting. We believe our work will inspire a lot of subsequent work in the ML community (as evident from the numerous extensions possible of our framework and connections to several different problems actively studied by the community including uncertainty estimation, conformal prediction, testable learning, and adversarial examples). We value the importance of empirical evaluations in general, however we do not believe experiments are essential to our paper. Our paper's main focus is purely theoretical and that is why we chose the category of 'learning theory', which our paper fits well in.
The reviewer says that ‘_I think that the paper is not ready for acceptance yet_’. **Could the reviewer please give us concrete reasons for this concern?** | Summary: This paper proposes a sequential prediction setting in which an adversary injects adversarial examples with clean labels, and the algorithm is allowed to abstain from predicting. This setting lies between the stochastic and the fully adversarial settings, which are known to be characterized by the VC and Littlestone dimensions of the hypothesis class.
The paper mainly proves the following two learning performance guarantees, stated in terms of the VC dimension. (The algorithm incurs a unit loss whenever it outputs an incorrect label or abstains on a clean data point.)
- An $O(d^2\log T)$ upper bound on the number of mistakes, where $d$ is the VC dimension and $T$ is the time horizon. This result requires the access to the marginal data distribution.
- An $O(\sqrt{T})$ error bound for the $d = 1$ case, without access to marginal distributions.
Strengths: This paper defines a new setting of sequential prediction that models the power of abstention against clean-label attacks (e.g., adversarial examples). I found the setting well-motivated and the technical material beautifully presented. The authors identified various future directions of work on this model as well as potential connection to the learning theory literature.
Despite the weaknesses discussed below, I (weakly) lean towards accepting this submission.
Weaknesses: - The error bounds are likely to be rather loose in terms of both $d$ and $T$.
- The assumption on having *perfect* access to the marginal data distribution is arguably too strong. While the approach should still work if the access to $\mathcal{D}$ is replaced with (unlabeled) samples from $\mathcal{D}$, the sample complexity would likely be $T^{\Omega(d)}$; see questions below.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Following up on the second point in the weakness part: To run Algorithm 1 using unlabeled examples from $\mathcal{D}$, it seems that we need an estimation of $\rho_k$ up to an $O(\alpha_k) = O(T^{-k})$ additive error, which in turn requires $T^{\Omega(d)}$ samples when $k = d$. Is there a more efficient way of using unlabeled examples, or would you conjecture that such a dependence is likely to be necessary?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: This work is purely theoretical and the limitations are the assumptions that underlie the problem setting as well as the theoretical results. These were addressed in the "Broader Impact" section in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful review. Here we address the questions/weaknesses pointed out by the reviewer.
**Looseness of the bounds**: For the known-distribution setting, this is only loose by a factor of $d$ compared to the fully stochastic setting with no adversarial/OOD data. Since our algorithm needs to also handle adversarial/OOD data, it is not clear whether this can be matched. For the unknown-marginal setting, we do not yet know if it is possible for general VC classes. Hence, we do not know what the regret is likely to be in these settings.
**Knowledge of distribution**: Yes, we agree that this assumption is very strong. However, even under this assumption, it is not obvious how to solve the problem, since the algorithm needs to decide "in-sample" vs. "out-of-sample" using only a single sample at test time and corrupted training samples. The only settings where we can remove the assumption are the special cases of VC dimension 1 classes and axis-aligned rectangles. Furthermore, our work follows a long tradition of work in the TCS and learning theory literature that constructs learning algorithms under a known marginal distribution. In fact, a considerable fraction of the literature on computational learning theory focuses on narrower assumptions, such as the marginal being known to be Gaussian or uniform on the hypercube.
**Efficient use of unlabelled samples**: The reviewer correctly points out that the current algorithm would require a very large number of samples $T^{O(d)}$ to be able to get accurate enough estimates for the algorithm to work. We do not yet know of a way to make use of the unlabelled samples more efficiently. Recall that given $n$ i.i.d. samples, we can construct ${n \choose k} = O(n^k)$, $k$-tuples, so there is some hope that we do not necessarily need $T^{O(d)}$ samples if we can show stronger concentration using dependent sampling. However, it is not clear to us if this is possible in general. Perhaps, at the cost of a worse dependence on $T$ (like in our unknown distribution setting) we could get away with less accurate approximations of the shattering probabilities.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions! I don't have follow-up questions, and my overall evaluation of the paper remains positive. | Summary: The paper presents a new protocol for beyond-worst-case sequential prediction, incorporating the option of abstention. It introduces two main algorithms: the first achieves an error rate of $O(d^2 \log T)$ for classes with VC dimension $d$, while the second realizes an error of $O(\sqrt{T})$ for a specific instance of classes with VC dimension $1$.
Strengths: 1. An extensive literature search and insightful discussion of prior work spanning a variety of areas. This includes nuanced connections to topics such as beyond-worst-case sequential prediction, abstention-based learning, adversarially robust learning, adversarial examples, testable learning, etc.
2. A new protocol of beyond-worst case sequential prediction with the option of abstention.
Weaknesses: 1. Although the authors propose a new protocol, I struggle to identify novelty, as the primary difference appears to be the relatively straightforward inclusion of an additional abstention option.
2. While new algorithms have been introduced, they are specifically tailored to special classes, thereby limiting the significance of the paper. Why is the special case of VC dimension 1 classes important?
3. The proof techniques presented appear to lack significant novelty. What, then, is the underlying challenge?
4. The phrase "naturally be extended" has been referenced multiple times throughout the paper, such as on lines 147-148, 366-367, and 370-371. Firstly, these extensions are not immediately evident, and as such, a more detailed discussion would be beneficial. Secondly, if these extensions are indeed straightforward, including them in the paper could greatly augment its significance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review. Here we address the major concerns raised by the reviewer.
**Novelty of the model**: The inclusion of abstention itself is not sufficient to get any guarantees in this setting. We need the correct notion of regret to accompany this. For instance, an algorithm could just abstain on the entire sequence, but such an algorithm would not be useful. On the other hand, from classical results we know that asking the algorithm to predict always would lead to very pessimistic bounds (could lead to infinite errors even when the VC dimension is 1). Our model balances these two settings to get a beyond-worst case guarantee where the algorithm is required to predict correctly on the i.i.d. examples while allowing free abstention on adversarial/ood examples. This allows us to design algorithms that get similar error bounds to the setting with only i.i.d. examples and non-adversarial examples. Previous models like KWIK could not get such guarantees because they didn't account for an underlying i.i.d. distribution which is natural in several real-world applications.
Further, we propose a loss corresponding to our setting given abstention and misclassification. But, our setting is not merely a new loss minimization setting. Since we do not have access to whether the examples the learner has seen are adversarially injected or i.i.d. ($c_t$ is unknown), the algorithm cannot compute its own loss (in particular, the learner cannot compute the abstention error). So the algorithm needs to work without directly knowing its own total loss. This highlights another major challenge of our setting. We believe this more realistically captures OOD and adversarial examples in the real world.
**Focus on particular classes**: Our known distribution algorithm is not tailored to special classes and works for all function classes as long as they have bounded VC dimension (which is a necessary condition for learning even for i.i.d. data). Note that the unknown distribution case is significantly more challenging than this, as we only have the examples in the past (which may be corrupted) to make the decision for the example at each time step, and we do not have any information about what the actual distribution could look like. As noted in the paper, even for the simple class of thresholds, designing an algorithm that works is already non-trivial. Currently, we do not know if this is even possible for general VC classes. Therefore, towards addressing the general question of VC classes, we naturally considered the class of VC dimension 1 (for example, also considered in closely related work by Blum et al., "Robust learning under clean-label attack," COLT 2021). Note that we also extend the results to axis-aligned rectangles, which can have much higher VC dimension.
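As a concrete aside for readers: thresholds are the canonical VC dimension 1 class. A quick self-contained check (our own illustration, not from the paper) that thresholds shatter any single point but never a pair of points — a threshold $h_a(x) = \mathbb{1}[x \ge a]$ can only produce monotone labelings:

```python
def achievable_labelings(points):
    """All labelings of `points` realizable by thresholds h_a(x) = 1[x >= a]."""
    xs = sorted(points)
    # Enough to try one threshold below all points, one between each
    # consecutive pair, and one above all points.
    candidates = ([xs[0] - 1]
                  + [(xs[i] + xs[i + 1]) / 2 for i in range(len(xs) - 1)]
                  + [xs[-1] + 1])
    return {tuple(int(x >= a) for x in points) for a in candidates}

# A single point is shattered: both labelings occur.
print(sorted(achievable_labelings([0.0])))       # [(0,), (1,)]
# No pair is shattered: the labeling (1, 0) for x1 < x2 is never realized,
# so the VC dimension is exactly 1.
print(sorted(achievable_labelings([0.0, 1.0])))  # [(0, 0), (0, 1), (1, 1)]
```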
**Novelty of proof technique**: To our knowledge, the proof techniques in the paper are novel. We consider both the algorithm design (keeping track of the higher-order shattering) and the analysis (the potential argument analyzing the tradeoff between abstentions and mistakes) as contributions of the paper. We do not know of any other paper that uses these. Our algorithm in the known distribution setting is inspired by Hanneke's algorithm for active learning; however, the setting and our analysis differ significantly from theirs. The main challenge in our setting is to identify at each time step whether the example we see is in-distribution or adversarial/OOD. This is an extremely challenging problem given that our sequence so far could also have adversarial corruptions, and it is not clear at all if this is even possible. Our algorithms create uncertainty measures by exploiting the structure of the underlying function class and the distribution in a highly non-trivial way, which we believe could have broader impact as a technique for uncertainty quantification in learning. **_Could the reviewer expand on the lack of novelty by pointing us to other works that use similar techniques?_**
**Use of "naturally be extended to"**: We apologize for using this phrase lightly, and will add details of the extensions for each case. Most uses of this phrase refer to the model we propose and not the algorithms/proofs. Here we explain these briefly:
- naturally be extended to the agnostic setting with adversarial/noisy labels
- To extend the model to the agnostic setting, we would need to modify the regret to include an additional error term to compare with the best possible error any predictor in the function class could get for the sequence.
- To extend to adversarial labels, we just allow the adversary to output any $y_t$
- naturally be extended to more general forms of prediction such as multiclass classification, partial concept classes, and regression
- To extend the model to multiclass classification, we would need to modify the prediction to allow for predicting from a set of $k$-labels and the misclassification loss would need to be modified to penalize error if the predicted label is not the true label
- Similarly for regression, we would need to allow prediction to be real-valued and the misclassification loss to be squared loss or the absolute loss.
- To extend to partial concept classes, we can keep the model essentially the same with the constraint that the adversary can only inject examples that are consistent with a hidden partial concept, that is, cannot provide an example where the partial concept does not have a label. With this definition, the model allows for more general concepts such as halfspaces with margin while allowing for similar analysis and algorithms.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. While I recognize the model's novelty, I maintain my stance that the results for a specific instance of classes with a VC dimension of 1 are constrained, and the primary technique employed seems limited in scope. Consequently, my score remains unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We understand your stance, but we wanted to briefly mention why VC dimension 1 is very interesting theoretically.
Classes of VC dimension 1 include thresholds and indicators of points. Though both of these classes are simple, they have large Littlestone dimension (the complexity of sequential prediction) and star number (the complexity of active learning), respectively. Thus, an upper bound for all VC dimension 1 classes rules out lower bounds purely in terms of these other candidate complexity measures. This is particularly interesting in the case of the Littlestone dimension, since it is the complexity measure for sequential prediction. Our VC dimension 1 upper bound therefore shows that these measures for closely related models *do not* capture the complexity of our model, and one can genuinely hope that there is a universal bound in terms of the VC dimension. | Summary: This paper proposes a pipeline and algorithms for machine learning prediction with abstention. The authors first propose an optimization framework to learn a model which allows abstention. They then consider different distributional settings and propose different algorithms for the learning process. Their theoretical analysis shows that the algorithms can achieve low error rates for misclassification and abstention.
Strengths: * The paper makes a variety of contributions. The authors start with the one-dimensional threshold scenario and show the application of the abstention model. They then provide algorithms for both the known-distribution and unknown-distribution settings. The latter is a more difficult setting, and the authors provide a generalized algorithm for it based on the former.
* The paper provides comprehensive error bounds for the algorithms. I am not an expert in theoretical ML analysis, but as far as I can tell, the theory part does not contain significant mistakes.
Weaknesses: * I do not see a significant real-world application of the proposed abstention framework with adversarial resilience. The authors mention the case of predicting a patient's illness, in which case I understand why abstention would be necessary. However, I do not see the necessity of caring about adversarial resilience in such a case - is it a (serious) threat that an adversary may launch a clean-label attack when people are training an illness classification task?
* The authors also emphasize the "sequential prediction" in addition to the abstention framework and adversarial resilience in the context. I am not fully clear why this is also an important point to consider - can we also apply the proposed pipeline on a static dataset that does not contain a sequential order but also benefits from the abstention pipeline?
* The paper does not contain empirical evaluation of the proposed algorithm, nor does it contain comparison with other works to show how good their bound is.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * What will be the practical use case for the abstention framework under the adversarial setting?
* Can the algorithm also be applied to get an abstention model for a non-sequential dataset?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss their limitations in their broader impact section. The paper does not have potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review. Here we address the major concerns raised by the reviewer.
**Significance of the model in the real-world**: The key benefit of our model and algorithms is the fact that they produce 'certain' predictions. In particular, the model is correct whenever it predicts despite out-of-distribution or adversarial data. Creating predictors equipped with this guarantee is very desirable in high-stakes applications where incorrect predictions are costly, and we need high-levels of reliability from our models. As for the medical scenario, the out-of-distribution setting is more natural to think about. For example, if our model has seen data mostly from a particular population (say the majority population), then our model is likely to make incorrect predictions on patients not from this population. In such scenarios, it is better for the predictor to abstain than make an incorrect prediction on the minority population. Similar out-of-distribution interactions can occur in other settings such as self-driving cars, chatbots, etc.
Further, we propose a loss corresponding to our setting given abstention and misclassification. But, our setting is not merely a new loss minimization setting. Since we do not have access to whether the examples the learner has seen are adversarially injected or i.i.d. ($c_t$ is unknown), the algorithm cannot compute its own loss (in particular, the learner cannot compute the abstention error). So the algorithm needs to work without directly knowing its own total loss. This highlights another major challenge of our setting. We believe this more realistically captures OOD and adversarial examples in the real world.
**Importance of considering sequential prediction**: Most real-world settings involve interactions with the model, and dynamic decision making, where the environment can change over time. Therefore, it is natural to consider prediction in the sequential setting. On the technical side, this setting is also much more challenging than the offline setting. In our case, the sequential part introduces several challenges:
- we need to decide to predict or abstain based on only one example
- the adversary can change their strategy over time adaptively based on our decisions.
That said, our techniques can be directly applied to a static dataset to achieve similar guarantees. But due to the relative simplicity of the static setting, more direct approaches have been previously studied, as described in our related work section under the title “abstention-based learning”.
**Comparison to other bounds**: In this work we have proposed a new model, therefore a direct comparison to existing bounds is not possible. As described in the paper, the algorithms designed for the stochastic or worst-case settings do not get non-trivial guarantees in our model. In particular, prior to our work, it was not even clear if we could get any rates better than the Littlestone dimension (which can be infinite even when the VC dimension is finite) in the non-stochastic setting with arbitrarily many adversarial injections. We refer the reviewer to our extensive related work section that compares our results to closely related other work.
**Empirical evaluation**: The focus of our work is mainly theoretical, and we believe empirical evaluation would distract from the main message of the paper. Our goal in this paper was to propose a model that sets a realistic goal for handling adversarial/OOD data and to show the existence of efficient algorithms in this setting. We hope that our work will inspire practitioners to develop practical algorithms using the algorithmic ideas proposed in this work. One particular direction that would be great to explore would be to use our uncertainty estimation measures based on shattered sets to create heuristics that work for more complex classes such as neural networks.
**Practical use of abstention**: Currently the proposed algorithms are not efficient, however there is hope to create heuristics based on our approaches that could measure the model's uncertainty which would have great practical usage. Even in the setting of adversarial examples (as in vision), equipping the model with abstention would allow us to prevent adversarial attacks which are outside of the perturbation sets we have trained for. Our related work highlights some ways in which this has been used in practice in the static setting.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification and explanation. Although I am still concerned with the practicalness of the proposed algorithm, I agree with the authors that the considered sequential prediction scenario is an important one to study on and the proposed framework is a good theoretical guide. Therefore, I would raise my final score to 5. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On Evaluating Adversarial Robustness of Large Vision-Language Models | Accept (poster) | Summary: This paper aims to assess the adversarial robustness of vision components in large vision-language models, which is an increasingly significant issue due to the prevalence of such models. The experiments conducted in this study are extensive, encompassing evaluations on UniDiffuser, BLIP, BLIP-2, Img2Prompt, MiniGPT-4, and LLaVA. The research investigates transfer-based attacks and also employs black-box queries to enhance the effectiveness of the evaluation method.
Strengths: The paper conducts extensive experiments on six prominent large vision-language models. By evaluating the adversarial robustness on these models, the study provides a comprehensive analysis that covers a wide range of architectures, ensuring the results are representative and applicable to the broader landscape of vision-language models.
One of the key strengths is that the paper demonstrates effective and realistic attacks specifically tailored for these models. By employing transfer-based attacks and utilizing black-box queries, the study goes beyond traditional methods to showcase the vulnerabilities and potential risks faced by large vision-language models in real-world scenarios.
Weaknesses: This is an evaluation paper. It would be nice to see how baseline defense, such as adversarial training on CLIP [1], would help mitigate the adversarial vulnerability of the proposed attack.
The method focuses on the vision part, which is easier to attack; what about attacks on the language part?
[1] Mao et al. Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. ICLR 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are the computational resources used to reproduce the paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review and suggestions, we have uploaded a rebuttal PDF.
***Q1: It would be nice to see how baseline defense, such as adversarial training on CLIP [1], would help mitigate the adversarial vulnerability of the proposed attack***
Thank you for the insightful comments. Following your suggestion, we apply a defense method, adversarial training on CLIP [1], and evaluate our proposed attack methods under this defense (we use the model checkpoints provided by [1]). The results are shown in **Table E** of the rebuttal PDF, where we employ the same experimental setups as in Table 2 of the main paper. As can be seen, even with the defense (i.e., adversarial training on CLIP by [1]), our attack methods achieve performance comparable to our original results, demonstrating their effectiveness. These experimental results and the accompanying discussion will be included in the revision.
***Q2: The method focuses on the vision part, which is easier to attack, what about attacks on the language?***
When an adversary attempts to fool a VLM, it can choose to attack either the vision or the language part, or both. We show in our paper that simply attacking the vision part is sufficient to fool the entire VLM into returning targeted responses. As a result, fooling a VLM by attacking the vision part is a more effective strategy than attacking the language part. Still, attacking VLMs through language would be an intriguing direction to pursue, and we will include a relevant discussion in the revision.
***Q3: What are the computational resources used to reproduce the paper?***
In Table I of Section A (in the Appendix), we have included detailed computational resources used in each experiment (Figure/Table) of our work, including (1) the type of GPU we used, (2) GPU hours, and (3) carbon emission. On the other hand, we hope to highlight that our proposed method is computationally efficient and can run on a single GPU.
***References:*** \
[1] Mao et al. Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. ICLR 2023
---
Rebuttal Comment 1.1:
Title: A follow up question
Comment: The reviewer thanks the authors; the rebuttal answered my questions 2 and 3.
For Q1, I have a follow-up question: which adversarial variant does the experiment in Table E use? Is it the prompting or the finetuning one? Since the adversarial training is almost not defending against the attack, the reviewer is worried that the adversarial training experiment was not optimally conducted. More details should be provided, such as what dataset the method was finetuned on, how many epochs were used, and why the defense results are worse than the findings in [1].
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you for your feedback. We used the checkpoint corresponding to the adv-CLIP variant called `TeCoA w/ Finetuning`, as provided in https://github.com/cvlab-columbia/ZSRobust4FoundationModel (the official code released by [1]). In Table E, we applied adv-CLIP to defend our transfer-based MF-ii (i.e., using adv-CLIP as the surrogate model for MF-ii), and we left the clean-CLIP encoder in Unidiffuser unchanged because we observe that **substituting it with adv-CLIP significantly degrades the clean performance of Unidiffuser**.
Nevertheless, we focus on the robust performance in this response, and in Table F below, we report additional results of using adv-CLIP to defend our query-based MF-tt (i.e., substituting the clean-CLIP encoder in Unidiffuser with adv-CLIP), or defending both MF-ii and MF-tt.
***Table F: Results of attacking Unidiffuser***
| Method | RN50 | RN101 | ViT-B/16 | ViT-B/32 | ViT-L/14 | Ensemble |
|---------------|:-----------:|-----------:|-----------:|-----------:|-----------:|-----------:|
| Clean image | 0.417 | 0.415 | 0.429 | 0.446 | 0.305 | 0.402 |
| MF-ii (adv-CLIP) + MF-tt (adv-CLIP) | 0.584 | 0.549 | 0.583 | 0.611 | 0.456 | 0.557 |
| MF-ii (clean-CLIP) + MF-tt (adv-CLIP) | 0.698 | 0.679 | 0.719 | 0.731 | 0.632 | 0.691 |
| MF-ii (adv-CLIP) + MF-tt (clean-CLIP) | 0.739 | 0.724 | 0.755 | 0.766 | 0.675 | 0.731 |
| MF-ii (clean-CLIP) + MF-tt (clean-CLIP) | 0.748 | 0.734 | 0.759 | 0.773 | 0.684 | 0.739 |
Please note that for transfer-based MF-ii, **the adversary can freely choose its surrogate model (e.g., either clean-CLIP or adv-CLIP)**, so the results in Table F demonstrate that even if the encoder in Unidiffuser is enhanced by adv-CLIP (i.e., MF-tt (adv-CLIP)), our attacks can still apply clean-CLIP in MF-ii (i.e., MF-ii (clean-CLIP)) to obtain a good initialization and efficiently evade Unidiffuser by combining with MF-tt (i.e., **the results in the row of MF-ii (clean-CLIP) + MF-tt (adv-CLIP)**).
The reason why adv-CLIP may be less effective at protecting large VLMs such as Unidiffuser is that the CLIP encoder only occupies a small portion of these VLMs (e.g., the CLIP encoder has 300M parameters, Unidiffuser has 1.4B parameters, and MiniGPT-4/LLaVA has more than 13B parameters). As a result, a strong defense may necessitate improving both the CLIP encoder and the language model in these VLMs. We will incorporate these discussions into the revision and conduct additional experiments on other large VLMs enhanced by [1].
***Reference:*** \
[1] Mao et al. Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. ICLR 2023 | Summary: The paper evaluates the pixel-space adversarial robustness of large vision-language models (VLMs), where the targeted attack has only black-box access to the large VLM systems. The paper introduces two adversarial strategies: transfer-based and query-based. The transfer-based strategy performs white-box attacks on surrogate models like CLIP and BLIP and transfers to large VLMs including MiniGPT-4 and LLaVA. The query-based strategy uses transfer-based priors to improve the efficacy of targeted evasion against these VLMs. The authors demonstrate the effectiveness of these strategies through comprehensive evaluations.
Strengths: 1. The paper is well-presented with good motivation, and the authors provide a clear storyline with visual demonstrations to validate the proposed approach.
2. It is novel to optimize the adversarial objective by matching image-image features (MF-ii) on large VLMs. The following Query-based attacking strategy (MF-tt) further enhances the effectiveness of the attack.
3. The authors analyze the approach from multiple perspectives. The paper includes discussions on the influences of the perturbation, and the Grad-CAM interpretation visualizes the model decision well.
Weaknesses: 1. The black-box setting is questionable. Although the adversary only has black-box access to the victim system $p_{\theta}$, the transfer-based attack has white-box access to foundation components like CLIP. There is a high chance that the victim system shares mutual information (e.g., $p_{\theta}$ contains CLIP / $p_{\theta}$ trains on the same vision-language dataset / $p_{\theta}$ has similar model architecture) with these white-box components. It will be beneficial if the authors elaborate more on the source of transferability.
2. Although the query-based attacking strategy (MF-tt) is feasible, the cost of such a process can be high. What is the forward inference cost (e.g., time per API call) per query, and how many queries are needed per attack? The authors should show the cost-effectiveness of query attacks.
3. The experiments mainly focus on imposing MF-ii as a prior for the subsequent MF-tt queries. It would be constructive if the authors demonstrated more on the sole use of MF-tt (which aligns with the black-box setting and is more computationally lightweight) without a strong MF-ii prior for the query attack.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The authors give Section 4.3 the title *VLMs induce semantic adversarial perturbations*. However, what Fig. 5 shows is not semantic adversarial examples but inference results of pixel-wise adversarial noises. The term semantic adversarial perturbation commonly indicates semantic variations that fail the model [1]. Is Fig. 5 trying to discuss *does VLM adversarial perturbations induce semantic meanings*?
2. Since the pixel-space adversarial noises are not intuitive/interpretable to human observers, are there any future discussions on generating VLM attacks based on physically-observable attributes or languages [1,2,3,4]?
3. To conclude, this paper is a timely contribution to the field. I will consider revising the rating based on the soundness of the authors' responses.
[1] ICCV 2019, Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers.
[2] ECCV 2020, Semanticadv: Generating adversarial examples via attribute-conditioned image editing
[3] CVPR 2023, Zero-shot Model Diagnosis
[4] ICLR 2023, Diagnosing and Rectifying Vision Models using Language
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the authors have stated the potential limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review and suggestions, we have uploaded a rebuttal PDF.
***Q1: There is a high chance that the victim system shares mutual information with these white-box components. It will be beneficial if the authors elaborate more on the source of transferability***
Indeed, existing open-source large VLMs are primarily composed of publicly available modules (e.g., CLIP and Vicuna) and are finetuned on some common vision-language datasets. These facts increase the chance that the surrogate model used by black-box adversaries shares mutual information with the victim model, essentially making the victim model vulnerable to adversarial transferability.
Nevertheless, we want to emphasize two points to demonstrate the significance of our transfer-based attacks. First, even if $p_{\theta}$ contains CLIP, the CLIP module only occupies a small portion of the large VLMs (e.g., a CLIP encoder has 300M parameters and LLaVA/MiniGPT-4 has 13.3B/14.1B parameters), where the main model capacity is involved in large language models that are unseen to our transfer-based attackers. Second, while knowing the CLIP module may be sufficient for *untargeted* attacks against large VLMs, we focus on the much more difficult *targeted* setting, and it is non-trivial to fool large VLMs into returning targeted responses solely based on adversarial transferability. On the other hand, as discussed in our response to **Q3**, we demonstrate that using query-based MF-tt alone is still effective against various VLMs, without transfer-based MF-ii as the prior.
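For readers unfamiliar with the mechanics, the transfer-based MF-ii idea can be sketched as PGD that pushes an adversarial input's surrogate features toward a target image's features under an $\ell_\infty$ budget. Below is a minimal toy version; this is our own illustration of the general feature-matching idea, not the paper's implementation — a random linear map stands in for the surrogate CLIP encoder, and small vectors stand in for images:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))              # toy stand-in for a surrogate encoder

def feat(x):
    z = W @ x
    return z / np.linalg.norm(z)          # unit-norm embedding, CLIP-style

def mf_ii(x_clean, x_target, eps=8 / 255, steps=20, lr=0.01):
    """PGD ascent on cos(feat(x_adv), feat(x_target)) within an L-inf ball."""
    x_adv = x_clean.copy()
    f_tgt = feat(x_target)
    for _ in range(steps):
        # Exact gradient of the cosine similarity for this linear toy encoder.
        z = feat(x_adv)
        g = W.T @ (f_tgt - (z @ f_tgt) * z) / np.linalg.norm(W @ x_adv)
        x_adv = x_adv + lr * np.sign(g)                        # ascent step
        x_adv = x_clean + np.clip(x_adv - x_clean, -eps, eps)  # projection
    return x_adv

x_clean, x_target = rng.uniform(size=16), rng.uniform(size=16)
x_adv = mf_ii(x_clean, x_target)
assert np.abs(x_adv - x_clean).max() <= 8 / 255 + 1e-9         # budget respected
print(feat(x_clean) @ feat(x_target), "->", feat(x_adv) @ feat(x_target))
```

In the real attack, the adversarial input crafted against the white-box surrogate is then transferred to the black-box victim VLM.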
***Q2: What is the forward inference cost (e.g., time of API call) per query, and how many queries are needed per attack?***
As stated in Line 176, we set the number of queries to $N=100$ and update the adversarial images by 8-step PGD for MF-tt, resulting in a total of 800 queries per attack in our setup. In **Table C** of the rebuttal PDF, we provide the time per API call for the different large VLMs used in our experiments. We will include more details in the revision.
***Q3: It would be constructive if the authors demonstrated more on the sole use of MF-tt (which aligns with the black-box setting and is more computationally lightweight).***
In **Table D** of the rebuttal PDF, we show the experimental results of using only MF-tt against various VLMs (e.g., Unidiffuser, BLIP-2, LLaVA). Our results indicate that our query-based attack MF-tt is effective against various VLMs on its own. However, when MF-ii is used as the prior, it achieves better performance. More results will be provided in the revision.
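Query-based strategies like MF-tt are typically built on zeroth-order gradient estimation: perturb the input randomly, observe only the black-box score, and average. The paper's exact estimator may differ; below is the standard random gradient-free (RGF) template as our own sketch, where a simple quadratic stands in for the victim VLM's matching score (the only thing the attacker can observe):

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.normal(size=32)

def black_box_score(x):
    """Stand-in for the victim's matching score; only its value is queryable."""
    return -np.sum((x - target) ** 2)

def rgf_estimate(x, n_queries=100, sigma=1e-3):
    """Random gradient-free estimate of the score's gradient from queries."""
    base = black_box_score(x)
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)          # random probe direction
        g += (black_box_score(x + sigma * u) - base) / sigma * u
    return g / n_queries

x = rng.normal(size=32)
est = rgf_estimate(x)
true_grad = -2 * (x - target)                 # known here only for checking
cos = est @ true_grad / (np.linalg.norm(est) * np.linalg.norm(true_grad))
print(f"cosine(estimate, true gradient) = {cos:.2f}")  # close to 1 with 100 queries
```

The estimated gradient then drives PGD updates, which is why the total query budget is (queries per estimate) × (PGD steps).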
***Q4: Is Fig. 5 trying to discuss whether VLM adversarial perturbations induce semantic meanings?***
Thank you for pointing this out. Yes, in Section 4.3 and Fig. 5, we attempt to discuss whether the crafted adversarial perturbations induce any semantic meanings. We will correct the claims in the revision.
***Q5: Are there any future discussions on generating VLM attacks based on physically-observable attributes or languages?***
Thank you for the insightful comments. In our paper, we constrain the pixel perturbations to be small enough (e.g., $\epsilon=8$) to be visually imperceptible (or difficult to perceive) by humans, and we show that simply attacking the visual modality is sufficient to fool the entire VLM. Nonetheless, attacking VLMs through physically-observable attributes on the vision part (e.g., adding eyeglasses or other accessories to a human portrait [1], modifying hair color or style [2, 3]) or through language [4] would be promising directions to pursue, and we will include a relevant discussion of these works in the revision.
***References:*** \
[1] Joshi et al. Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers. ICCV 2019 \
[2] Qiu et al. SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing. ECCV 2020 \
[3] Luo et al. Zero-shot Model Diagnosis. CVPR 2023 \
[4] Zhang et al. Diagnosing and Rectifying Vision Models using Language. ICLR 2023
---
Rebuttal Comment 1.1:
Comment: I appreciate the revision and the insights provided by the rebuttal. The experiments are sufficiently good. I will raise the rating to 6.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: We appreciate your detailed comments and suggestions. We will polish our paper further and incorporate new results into the final revision. Thank you again! | Summary: This paper focuses on black-box targeted adversarial attacks on multimodal vision/language models via transfer. They observe particular vulnerability to transfer attacks because an adversarial image can be constructed in a fully-differentiable manner w.r.t. a model like CLIP and then transferred over to the black box target model. Their final approach involves matching the adversarial image features to both text and image features in the surrogate models used for training.
Strengths: I think the combined method MF-it and MF-ii is clever, and I think the results are impressive.
I find the paper clearly valuable and timely since models with these vulnerabilities are currently being deployed and their vulnerabilities could pose risks. I appreciate the discussion of this in the conclusion.
I think that the experiments are compelling. I do not spot errors.
Weaknesses: (Minor) I find the figs to be generally cluttered.
I would have liked to see experiments related to real-world concerns such as ones that were meant to illustrate real ways that these attacks could cause problems. For example, evading NSFW filters or making language models provide misinformation about an image that is used in a query.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could these vulnerabilities have been foreseen (probably yes)? Have they been addressed at all by the people open-sourcing VLMs? What are the most immediate risks posed by deployed VLMs? What does this mean for AI norms and governance? I would welcome more discussion of these in the discussion section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This paper did not focus on defenses for these attacks, but I would not have expected it to.
The use of GradCAM to analyze the attacks is limited and based on visual inspection. Also GradCAM and similar methods are well known to have many problems. Given my familiarity with some of the problems with feature attribution methods, these experiments don't really add much to the paper in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review and suggestions, we have uploaded a rebuttal PDF.
***Q1: (Minor) I find the figs to be generally cluttered***
Thank you for the comments. In the revision, we will rearrange our figures to make the content more clear.
***Q2: I would have liked to see experiments related to real-world concerns. For example, evading NSFW filters or making language models provide misinformation about an image that is used in a query***
In this work, we used MiniGPT-4, LLaVA, and Img2Prompt to evaluate the visual question-answering (VQA) task, in which an image is indeed used in a query and the adversarial goal is to make language models provide misinformation. The results have been demonstrated in Figure 3 (main paper), Figures IV and V (Appendix). These results are highly relevant to real-world concerns, because we can fool the model into returning *targeted* outputs, which could contain any predefined misinformation or bypass NSFW filters.
***Q3: Could these vulnerabilities have been foreseen (probably yes)? Have they been addressed at all by the people open-sourcing VLMs?***
These vulnerabilities were most likely foreseen by OpenAI, and we believe this is one of the reasons why OpenAI is delaying the release of GPT-4's visual input. Unfortunately, our results indicate that these vulnerabilities are rarely addressed, or even noticed, by the people open-sourcing VLMs. Existing open-source VLMs still primarily compete on performance on standard benchmarks, while being mostly made up of publicly accessible modules that do not take adversarial robustness into consideration.
***Q4: What are the most immediate risks posed by deployed VLMs? What does this mean for AI norms and governance? I would welcome more discussion of these.***
Thank you for the questions. There have been attempts to deploy large VLMs into robots or autonomous driving systems, where the visual inputs could be the images captured by a camera (i.e., interaction with environments) and the language inputs could be user commands (i.e., interaction with humans). In these scenarios, adversarial attacks on VLMs could mislead robots or autonomous driving systems, inducing them to engage in risky behaviors. Improved adversarial robustness of VLMs and more general multi-modal models are thus critical for AI norms and governance.
***Q5: This paper did not focus on defenses for these attacks, but I would not have expected it to***
Following your suggestion, we apply a recent defense method that adversarially trains a CLIP model [1] and evaluate our proposed attack methods under this defense (we use the model checkpoints provided by [1]). The results are shown in **Table E** of the rebuttal PDF. As can be seen, even against the defense of [1], we can still achieve performance comparable to our original results, demonstrating the effectiveness of our attack methods. These experimental results and the discussion will be included in the revision.
***Q6: The use of GradCAM to analyze the attacks is limited and based on visual inspection. Also GradCAM and similar methods are well known to have many problems***
Thank you for your constructive feedback; we will include your comments in the paper to clarify these limitations in the revision.
***References:*** \
[1] Mao et al. Understanding Zero-Shot Adversarial Robustness for Large-Scale Models. ICLR 2023
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for the updates and comments. I have few further comments to discuss. I think my rating will most likely stay at a 6 but may increase to a 7. I will be paying attention to the rebuttal process and other reviews in the next week.
What if any changes are being made to the paper regarding 3 and 4? Could the authors paste the updates in response to 6 into a reply?
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: Thank you for your timely feedback.
***Q (a): What if any changes are being made to the paper regarding 3 and 4?***
Since an additional content page is allowed for the final revision of NeurIPS (if our paper is accepted), we will add subsections or paragraphs in Section 5 to incorporate the content of our responses to *Q3* and *Q4*. This includes discussions of adversarial vulnerabilities that could be foreseen but have not yet been addressed by the people open-sourcing VLMs, as well as AI norms, governance, and the immediate risks posed by deployed VLMs.
(We are unsure whether we understand this question correctly. Please let us know if there are any additional details you'd like to discuss.)
***Q (b): Could the authors paste the updates in response to 6 into a reply?***
We will include more discussions about the motivation for visual inspection and the limitations of using GradCAM in Section 4.3 (after Lines 221-226), as summarized below:
As to the motivation, we followed Img2Prompt [1] and used GradCAM to compute and visualize the relevancy score of image patches with respect to the question (i.e., the 'query'), as demonstrated in our interpreting analyses (Figure 7 in the main paper and Figure VI in the Appendix). We clarify that this visual inspection only provides an *intuitive* way to interpret our attacking mechanism.
Nonetheless, we emphasize that the use of GradCAM as a feature attribution method has limitations [2]. For example, GradCAM is known to fail at localizing multiple occurrences of the same object in an image: although such multiple occurrences are common, the global average pooling operation leads to inaccurate localization of the heatmap. In addition, the successive upsampling and downsampling operations may cause signal loss prior to visualization.
***References:*** \
[1] Guo et al. From Images to Textual Prompts: Zero-shot Visual Question Answering with Frozen Large Language Models. CVPR 2023 \
[2] Chattopadhay et al. Grad-Cam++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks. WACV 2018 | Summary: The authors propose to generate adversarial attacks on different vision-language models (VLM) like BLIP, MiniGPT4, and UniDiffuser. The proposed method is simple and straightforward. To perturb the image, the authors propose to maximise the similarity (inner dot product) between the image-image / text-text / image-text features of CLIP. The attacked image is then transferred to the black box VLM models, which generates some text on these images. The similarity between the generated and the target text is calculated using unnormalised cosine similarity on the CLIP text encoder outputs. Maximising the similarity between the CLIP encodings of clean and target text and CLIP encodings of the clean and generated images of target text doesn't require any gradient approximation. But maximising the similarity between the target text encodings and generated text requires zero-order optimisation to get the gradients on the image. The authors demonstrate that the proposed attack completely fools the VLM models to generate text very similar to the target text.
Strengths: * The proposed method is incredibly simple and, therefore, highly valuable for the community
* The proposed method evaluates different types of VLM models, which makes the claims of the proposed method stronger.
* The problem identified is timely and highlights the importance of building robust VLM models in future.
Weaknesses: * The proposed method shows the results of the PGD-100 attack, while in most of the black box attacks, it is observed that stronger attacks can overfit to the surrogate model resulting in worse transferability. Therefore, it would be good if the authors can also share the results on weaker attacks like PGD-10 and FGSM.
* I think comparing the text quality using the CLIP score may not give the true picture. It would be nice if the authors can try some other metrics to compare the text quality.
* For some of the approaches like LLaVA and MiniGPT-4 in Table-2, the results obtained using the proposed attacks like MF-it and MF-ii are very close (almost identical) to those on the clean image. Does this mean that the model is not able to get fooled? It would be nice if the authors can present results using some metrics which can quantify whether the model is fooled or not. This will help in getting a deeper understanding of the proposed method.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: It would be great if the authors can address the raised concerns in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have properly addressed the limitations and negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your supportive review and suggestions, we have uploaded a rebuttal PDF.
***Q1: It would be good if the authors can also share the results on weaker attacks like PGD-10 and FGSM.***
In **Table A** of the rebuttal PDF, we follow the Reviewer’s suggestion to report additional results using fewer PGD steps (e.g., 10-steps and 50-steps) against different large VLMs (e.g., Unidiffuser, BLIP-2, and LLaVA). As can be seen, PGD-100 outperforms PGD-50 and PGD-10, though the benefits tend to saturate as more PGD steps are added. This indicates that PGD-100 is still not overfitting to surrogate models, which could be attributed to the complexity of large VLMs versus traditional image classification models.
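To make the attack loop concrete, here is a minimal, self-contained sketch of an $l_\infty$-bounded PGD ascent maximizing a feature-similarity objective; the toy linear "encoder" and all names are our own illustration, not the CLIP-based implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))  # toy linear stand-in for an image encoder

def features(x):
    f = W @ x
    return f / np.linalg.norm(f)  # unit-normalized features

def similarity(x, target_feat):
    return float(features(x) @ target_feat)

def grad_similarity(x, target_feat):
    # Analytic gradient of the similarity for this linear toy encoder.
    f = W @ x
    n = np.linalg.norm(f)
    J = (np.eye(len(f)) / n - np.outer(f, f) / n**3) @ W  # Jacobian of features(x)
    return J.T @ target_feat

def pgd_attack(x_clean, target_feat, eps=8/255, alpha=1/255, steps=100):
    """l_inf-bounded PGD maximizing feature similarity to a target."""
    x = x_clean.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_similarity(x, target_feat))  # ascent step
        x = np.clip(x, x_clean - eps, x_clean + eps)  # project into the eps-ball
        x = np.clip(x, 0.0, 1.0)                      # keep a valid pixel range
    return x

x_clean = rng.uniform(0, 1, size=64)
target_feat = features(rng.uniform(0, 1, size=64))
x_adv = pgd_attack(x_clean, target_feat)
```

Reducing `steps` (e.g., to 10 or 50) in this loop corresponds to the weaker-attack comparison discussed above.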
***Q2: It would be nice if the authors can try some other metrics to compare the text quality.***
Following your suggestion, we compare the text quality by computing the embedding similarity between the generated response and the targeted text using FlagEmbedding [1], a state-of-the-art text embedding model (ranked first on MTEB and C-MTEB benchmarks). The findings are presented in **Table B** of the rebuttal PDF, where we employ the same experimental setups as in Table 2 of the main paper. As demonstrated, using FlagEmbedding yields similar results to using the CLIP score, and we will add these results in the revision.
Furthermore, we would like to clarify that in the main paper, we used various text encoders (ResNet-50/101, ViT-B/16, ViT-B/32, and ViT-L/14) to ensure a thorough assessment of text quality, and we also provide multiple visualized demonstrations (Figures 1, 2, 3 in the main paper and Figures II, III, IV, V in the Appendix) for human evaluation.
***Q3: For some of the approaches like LLaVA and MiniGPT-4 in Table 2, the results obtained using MF-it and MF-ii are very close to the clean image. Does this mean that the model is not able to get fooled?***
Indeed, we observe that models with larger capacities (13.3B parameters for LLaVA and 14.1B parameters for MiniGPT-4) are more difficult to get fooled using only transfer-based attacks like MF-it and MF-ii, compared to those with smaller capacities such as BLIP and UniDiffuser. This could be because LLaVA and MiniGPT-4 both use a large language model (Vicuna-13B [2]) as the text generator, which is relatively robust to transfer-based attacks. Therefore, we use transfer-based MF-ii to initialize the adversarial perturbation and query-based MF-tt to further tune the perturbation (see Figure 4 for details), which is effective as shown in Table 2 and Figure 3 (more results can be found in the Appendix).
***Q4: It would be nice if the authors can present results using some metrics which can quantify whether the model is fooled or not***
We additionally conduct user studies to determine whether the model is fooled or not. We specifically categorize the generated responses into four types, which are as follows: *Type-(1)*: the model is completely fooled, i.e., it can generate the targeted response; *Type-(2)*: the model is partially fooled and can generate concepts/information related to the target text/image; *Type-(3)*: the model is partially fooled and can generate a non-targeted response that is related to neither the target text/image nor the original clean image; *Type-(4)*: the model cannot be fooled. We randomly selected 100 VLM responses and asked users to vote on whether these models were fooled. The results are as follows:
| VLM | Type-(1) | Type-(2) | Type-(3) | Type-(4) |
|---|:---:|:---:|:---:|:---:|
| Unidiffuser | 84.5\% | 8\% | 5\% | 2.5\% |
| BLIP-2 | 86\% | 9.5\% | 2.5\% | 2\% |
As shown in our user studies, these VLMs can be fooled to a large extent in most cases, similar to the visualization results in our figures. Due to time constraints, we only evaluate Unidiffuser and BLIP-2 here, but we will conduct user studies on other large VLMs in the revision.
***References:*** \
[1] https://github.com/FlagOpen/FlagEmbedding \
[2] Chiang, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for providing a comprehensive rebuttal. I appreciate the authors for accepting that their method is difficult to apply for larger VLMs. I think this work shows a very interesting direction and would be valuable for the community. I will keep up my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: We appreciate your detailed comments and suggestions. We will polish our paper further and incorporate new results into the final revision. Thank you again! | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback, and we have responded to each reviewer individually. We have also uploaded a rebuttal PDF that includes:
- **Table A**: CLIP score ($\\uparrow$) with fewer PGD steps against different VLMs;
- **Table B**: FlagEmbedding score ($\\uparrow$) as additional text quality metric against different VLMs;
- **Table C**: The average time of API call of each query of different VLMs;
- **Table D**: CLIP score ($\\uparrow$) with the sole use of MF-tt against different VLMs;
- **Table E**: CLIP score ($\\uparrow$) with the defense method applied (Mao et al. ICLR 2023).
Pdf: /pdf/1070cbfbd2bd1187cd40b7dffb0fe841b1cf4e66.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Practical Equivariances via Relational Conditional Neural Processes | Accept (poster) | Summary: This paper is an extension of the conditional neural process. The primary contribution is to augment the conditional neural process with the relational structure. The developed model was examined in synthetic regression, Bayesian optimization, Lotka-Volterra simulation, and others.
Strengths: 1. This paper is complete in organization and easy to understand.
2. The relational inductive biases mentioned in this work are crucial in some scenarios but were already studied in previous NP-related work (see Weaknesses).
Weaknesses: **1. Unclear Motivation.** The motivation for using relational inductive biases is not well clarified in the Introduction section, e.g., in which scenarios or datasets the relational information is required. Meanwhile, I am not sure of the definition of "relational" in this work, and it seems the concept is similar to that in the Set Transformer.
**2. Lack of Novelty.** The novelty of this work seems relatively limited. There have been extensive works to incorporate equivariance into neural process models, such as work in [1-4]. As for the use of relational inductive bias, especially when the input/output are high dimensional, this has appeared in work [5].
**3. Incomplete Analysis.** In Tables (1)/(2), the KL divergences are reported, but I did not find an explanation of what they mean in the results. Notably, most empirical analyses claim the proposed model outperforms others, but the reasons behind these observations, e.g., what kind of equivariance matters, are unclear.
**4. Missing benchmarks and baselines.** Images naturally hold translation equivariance/invariance properties and are more appropriate for evaluation. This work fails to compare with other equivariant (C)NPs [1-4].
**5.** The contribution and the organization of the paper are mixed in Line 49-Line 56.
**References:**
[1] Kawano, M., Kumagai, W., Sannai, A., Iwasawa, Y., & Matsuo, Y. (2021). Group equivariant conditional neural processes. arXiv preprint arXiv:2102.08759.
[2] Holderrieth, P., Hutchinson, M. J., & Teh, Y. W. (2021, July). Equivariant learning of stochastic fields: Gaussian processes and steerable conditional neural processes. In International Conference on Machine Learning (pp. 4297-4307). PMLR.
[3] Markou, S., Requeima, J., Bruinsma, W., Vaughan, A., & Turner, R. E. (2021, October). Practical Conditional Neural Process Via Tractable Dependent Predictions. In International Conference on Learning Representations.
[4] Foong, A., Bruinsma, W., Gordon, J., Dubois, Y., Requeima, J., & Turner, R. (2020). Meta-learning stationary stochastic process prediction with convolutional neural processes. Advances in Neural Information Processing Systems, 33, 8284-8295.
[5] Wang, Q., & Van Hoof, H. (2022, June). Model-based meta reinforcement learning using graph structured surrogate models and amortized policy search. In International Conference on Machine Learning (pp. 23055-23077). PMLR.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: No.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: Limitations are highlighted in Section 7. Here it seems that the runtime complexity of predictive distributions is higher than that of Gaussian processes, which is quadratic w.r.t. the number of data points. Remember that one of the motivations in CNPs or NP-like models is the reduction of runtime complexity.
Other limitations are included in the weakness section.
The definition of equivariance is incorrect.
Some missing citations [1-5].
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort spent reading our paper. We would like to clarify what we perceive as misunderstandings of the nature and contribution of our paper. Our core contribution is in providing a simple, effective way to build exact equivariances (with a focus on translational equivariance) directly in the CNP architecture, *in a way that easily scales to higher input dimensions*. This is something that is currently not addressed in existing work, including the references you provided. Here we would like to clarify the perceived weaknesses of our paper, views which we do not share and explain further below.
### Weaknesses
> *1. Unclear Motivation. The motivation for using relational inductive biases is not well clarified in the Introduction section, e.g., in which scenarios or datasets the relational information is required.*
We clarified our motivation in the Introduction: proposing a simple method to implement some exact equivariances into CNPs (particularly, translational equivariance), in a way that scales efficiently with input dimension. The relational approach is how we achieve our goal.
> *Meanwhile, I am not sure of the definition of relational in this work, and it seems the concept is similar to that in Set Transformer.*
We use Set Transformers to achieve invariance to permutations of the context set (see line 128). Unlike Set Transformers, we encode data points via the comparison function $g$. We call our encoding (and method) “relational” because data points are encoded based on how they compare or relate to each other.
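For intuition, a minimal toy sketch of this relational idea (our own simplified construction, not the full RCNP architecture): if the comparison function $g$ only sees relative positions, a sum-aggregated set encoding becomes invariant to a shared translation of all inputs, as well as permutation invariant.

```python
import numpy as np

def g(x_target, x_ctx):
    # Comparison function: only relative positions enter the encoding
    # (a simple choice yielding translational invariance; the paper's g
    # may differ in detail).
    return x_target - x_ctx

def relational_encoding(x_target, X_ctx, Y_ctx):
    # DeepSets-style sum over the context set of features built from the
    # comparison g and the context outputs; summing gives permutation invariance.
    feats = [np.concatenate([g(x_target, xc), np.atleast_1d(yc)])
             for xc, yc in zip(X_ctx, Y_ctx)]
    return np.sum(feats, axis=0)

rng = np.random.default_rng(1)
X_ctx = rng.normal(size=(5, 2))
Y_ctx = rng.normal(size=5)
x_target = rng.normal(size=2)
shift = np.array([3.0, -1.0])  # a shared translation of all inputs

r_orig = relational_encoding(x_target, X_ctx, Y_ctx)
r_shift = relational_encoding(x_target + shift, X_ctx + shift, Y_ctx)
```

Because the encoding is unchanged under the shift, any predictions computed from it are translation equivariant by construction.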
> *2. Lack of Novelty. The novelty of this work seems relatively limited. There have been extensive works to incorporate equivariance into neural process models, such as work in [1-4]. As for the use of relational inductive bias, especially when the input/output are high dimensional, this has appeared in work [5].*
We firmly disagree with this claim, which misses our main contribution. All existing methods for incorporating exact equivariances in CNPs (ConvCNP and variants [3,4], SteerCNP [2], EquivCNP [1]) require at some point a regular discretization (lattice) of the input space and convolution operations. This limits in practice the number of input dimensions these methods are applied to (often up to 1 or 2 equivariant dimensions), as lattice-based methods scale exponentially with the input dimension.
Thank you for the references. Note we already had [3,4] in our paper. **We will include a more extensive discussion of [1-5] in the Related Works.** Briefly:
- [2] introduces SteerCNP, which directly generalizes ConvCNP to other equivariances and suffers from its exact same scaling issues (e.g., Section 8 of [2], “*One practical limitation of this method is the necessity to discretise the continuous RKHS Embedding, which can be costly*”). [2] mentions the possibility of using alternative architectures to bypass this issue, but this is not developed in the paper.
- [1] introduces EquivCNP, based on LieConv (Finzi et al., *ICML* 2020). While LieConv per se can operate on irregular point clouds, EquivCNPs require constructing a lattice over the input space (see Algorithm 1 and Section 4.3 of [1]), thus reinstating the scalability problem.
- [5] aims to learn arbitrary relational inductive biases in a NP model using a relational graph structure on the context set. While interesting, this differs from our goal of directly building *exact equivariances* in the CNP architecture.
> *3. Incomplete Analysis. In Table (1)/(2), the kl divergences are reported, but I did not find what it means in the results. Notably, most empirical analysis claims the proposed model outperforms others, but the reasons [...] are unclear.*
Our RCNP models implement specific equivariances (translational or isotropy), as specified in each experiment. If our method works better, it is because encoding that equivariance is useful for the task; we will add an explanation.
> *4. Missing benchmarks and baselines. Images naturally hold translation equivariance/invariance properties and are more appropriate for evaluation.*
We added new results on MNIST and CelebA (see Table 3 and Fig 1 in the rebuttal PDF). In our paper, we focused on higher-dimensional tasks to showcase the capabilities of our model.
> *This work fails to compare with other equivariant (C)NPs [1-4].*
This is not correct. We compared against ConvCNP and ConvGNP [3], where applicable. Due to the mentioned scalability issues, we could not test these methods above input dimension $d_x=2$. ConvNPs [4] have the same scalability issues as ConvCNP and are generally outperformed by ConvGNP (see [3]). The other equivariant CNPs [1,2] have the same scalability issues. Moreover, SteerCNP [2] is a strict generalization of ConvCNP, so there is no need to run separate tests in the translational-equivariant case.
> 5. The contribution and the organization of the paper are mixed.
We rewrote a separate paragraph, distinct from the paper organization, with the contributions described at the beginning of this response.
### Limitations
> *Here it seems that the runtime complexity of predictive distributions is higher than that of Gaussian processes [...].*
This is not correct. Unlike CNPs which are amortized, GPs still require training (fitting the kernel parameters), which is generally *cubic* in the number of data points. There are quadratic techniques but they require tuning for good results (Maddox et al., 2021). Regarding scalability of FullRCNP, see also our answer to reviewer N64F.
> *The definition of equivariance is incorrect.*
Our definition of equivariance is correct and equivalent to the standard definition for general mappings, $\tau f(X) = f(\tau X)$. We added a section in the Supplement showing that our definition of equivariance of a prediction map based on the invariance of its representation (Eq. 3) is equivalent to the standard definition, as found e.g. in [2] or in the ConvCNP paper (Gordon et al., 2020).
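Sketched in our notation (with $\Phi$ denoting the decoder acting on the representation $r$, and $\tau$ a translation applied to all inputs), the argument is one line: if $r(X^* + \tau, X + \tau, Y) = r(X^*, X, Y)$ for every shift $\tau$, then

```latex
p(Y^* \mid X^* + \tau;\; X + \tau, Y)
  = \Phi\bigl(r(X^* + \tau,\; X + \tau,\; Y)\bigr)
  = \Phi\bigl(r(X^*,\; X,\; Y)\bigr)
  = p(Y^* \mid X^*;\; X, Y),
```

which is exactly the standard translation-equivariance condition for a prediction map whose outputs do not transform under input translations.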
---
Rebuttal Comment 1.1:
Title: Major Concerns Still Exist and Keep Scores
Comment: Thank the author for the detailed feedback. After reading the review, I tend to keep my score since major concerns still exist, and I will explain them as follows.
***1. Set Transformer & Contributions & Novelty (Major Concern)***
Note that
(1) The author claimed "**the Set Transformer was used to achieve invariance together with a comparison function g**"; however, this work did not cite the Set Transformer at all. As this is a core module in learning invariance and equivariance, it cannot be ignored, and the contribution should be clarified in practice.
(2) the Set Transformer can learn the equivariant map, while **the model structure of this work resembles too much on that**.
(3) The notion of exact equivariance is a bit vague in the descriptions. Meanwhile, I disagree that previous equivariant CNP work cannot scale with input dimension, since the embedding function can be applied to high-dimensional input.
(4) Since the introduced modules are marginal w.r.t. the Set Transformer, I still doubt the novelty.
***2. Incomplete Analysis***
In the rebuttal, the author claimed
>if our method works better, it is because encoding that equivariance is useful.
(1) I do not see the logic of this discussion. There are many factors that could explain the increased performance, including the neural architecture (particularly the Set Transformer), model complexity, etc. (2) The meaning of the KL divergence in Tables (1)/(2) is still not well discussed.
***3. High Runtime Complexity in Testing (Major Concern)***
(1) In Appendix Lines 225-227, the proposed method's runtime complexity is $\mathcal{O}(MN^2)$, while the runtime complexity of CNPs is $\mathcal{O}(N+M)$ in prediction. **Note that one motivation for developing CNPs is to reduce the inference complexity of Gaussian processes.**
(2) Given the same runtime complexity, one could directly apply a Gaussian process with equivariant kernels to solve the problem, which should be more accurate in capturing equivariance. So what is the benefit of the higher runtime complexity in this work?
***4. Still Wrong Definition of Equivariance in Line72 Eq. (3)***
Note that the output in Eq. (3) is after $r$'s transformation; however, there is no equivariant operation w.r.t. $r$'s output. This violates the definition of equivariance. Also, this is not the standard definition used in the ConvCNP paper.
---
Reply to Comment 1.1.1:
Title: Our Further Clarifications to Major Misunderstandings
Comment: Thank you for your detailed response, to which we respond below.
**1a. Set Transformer (Major Misunderstanding)**
We apologize for a truly unfortunate typo in our rebuttal (not our paper), which likely emerged while trimming our response. We meant to say "We use **Deep Sets** to achieve invariance to permutations of the context set (see line 128)." The citation in line 128 is the DeepSets paper. **We do not use Set Transformer.** This can be verified by checking our code submitted in the Supplement.
Set Transformers differ from DeepSets in that they add an attention mechanism which *learns* interactions between elements of the set. **Our method does not use attention.** Instead, we encode an *exact equivariance* via the comparison function $g$. Notably, this is an orthogonal point. $g$ can likely be applied with Set Transformers (instead of DeepSets) to augment our approach with attention, while at the same time encoding exact equivariances from the start. However, departing from the DeepSet architecture (as mentioned also by you later) would confuse our contribution and muddle the comparison with prior CNP work on exact equivariances, based on (Conv)DeepSets. Thank you for raising this point, which we will add to Related Works and Discussion, with references to the Set Transformer (Lee et al., 2019) and Transformer Neural Processes (Nguyen & Grover, 2022). Apologies for not having included these references before – while they do not deal with exact equivariances, they should definitely be discussed.
**1b. Learning Equivariances vs. Implementing Exact Equivariances (Major Misunderstanding)**
Overall, the crux seems to be a difference in views between "learning (approximate) equivariances" and "building exact equivariances in the network architecture". Our method does the latter.
We stand by our statement that no existing method is able to incorporate *exact* equivariances in the CNP architecture from the start (e.g., like CNNs incorporate exact translational equivariance), in a way which scales well with input dimension. We believe that there is a fundamental conceptual difference between learning approximate equivariances and incorporating specific exact biases directly in the network architecture. Incidentally, the two approaches are not opposed; we could likely incorporate some exact equivariances and let the network learn others.
**2. Analysis**
**We do not use Set Transformers.** Our architecture is based on DeepSets and a fixed comparison function $g$. We aimed to keep a similar complexity between different networks in terms of e.g. number of parameters. We are happy to answer more specific questions about the analysis.
The KL is computed, when possible, between the predictive map of the CNP $p(Y^*| X^*; X, Y)$ and the ground-truth posterior predictive map from the GP. We will include a clear definition, apologies that it was not explained before. This metric was used by e.g. Bruinsma et al. (2023).
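For concreteness, when both predictive distributions are Gaussian the KL has a closed form; a sketch of how such a per-target-point metric could be computed (illustrative only — the exact evaluation protocol is the one in Bruinsma et al., 2023):

```python
import numpy as np

def kl_gauss(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) between scalar Gaussians."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Model predictive vs. a hypothetical ground-truth GP posterior at one target point:
print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # identical distributions -> 0.0
print(kl_gauss(1.0, 1.0, 0.0, 1.0))  # unit mean shift -> 0.5
```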
**3. Runtime Complexity in Testing**
First, please note that the complexity for the diagonal RCNP (which, as we proved both in theory and practice, is *exact* for translational equivariance) is only $O(MN)$. This is worth considering given that translational equivariance is such an important property and plenty of CNP work on exact equivariances focuses *exclusively* on that (e.g., ConvCNP, Gordon et al. 2020; ConvGNP, Markou et al., 2022; FullConvGNP, Bruinsma et al., 2020). In practice, diagonal RCNPs can be *faster* at runtime than ConvCNP, their direct counterpart (see Table S1).
Second, CNPs are amortized while GPs are not. So even though the asymptotic complexity of FullRCNP is $O(N^2M)$, a GP, lacking amortization, must be trained from scratch at $O(N^3)$ cost. In many applications, the model receives data sequentially (e.g., BayesOpt), which means that the cost of GP training is incurred repeatedly.
**4. Definition of Equivariance is Correct**
Please note that we are defining equivariance of the *prediction map*. Since a prediction map fully depends on its representation $r$, *we can define equivariance of the prediction map based on properties of $r$* (i.e., invariance), there is no contradiction. We will clarify this in the paper.
Our condensed proof is:
The standard definition of equivariance for a generic mapping is
$$\tau F_Z = F_{\tau Z}$$
Starting from the definitions of the ConvCNP paper (Property 2, Gordon et al., 2019):
$$\tau(X,Y) = (\tau X, Y), \qquad \tau f(X) = f(\tau^{-1} X)$$
We apply the defs. above to a prediction map, seen as the function that takes as input the context set $(X, Y)$ and outputs a function that takes as input the target set $X^*$:
$$P_{\tau(X,Y)} = \tau P_{(X,Y)} \Leftrightarrow \forall X^*, \ p(\cdot \mid r((\tau X,Y), X^*))= p(\cdot \mid r((X,Y),\tau^{-1}X^*))$$
Applying $\tau^{-1}$ to each side:
$$P_{\tau(X,Y)} = \tau P_{(X,Y)} \Leftrightarrow \forall X^*, r((X,Y),X^*) = r((\tau X,Y),\tau X^*)$$
which is our definition (Eq. 3). | Summary: This work introduces a new member of the neural process model family that is designed for biasing the model towards representing equivariances in the data. It does this by including relational information among the context set, and between the predicted and context inputs in the encoder for a new input. Also, only the relational information is used for encoding and the absolute information is discarded. Experiments are given for several types of equivariances/applications to show the method successfully models equivariances in the data.
Strengths: Simple but effective technical idea with convincing experiments. For instance, I found Figure 1 very illuminating. The work shows promise for scaling GP-like models (or rather, emulating GP kernels) for distributions over functions to large data/dimensions.
Weaknesses: The full RCNP variant has tractability issues, although it is shown that the simpler RCNP using the diagonal elements of the relational matrix performs satisfactorily.
Equivariances other than translations and rigid transformations, and other variants of CNP are left to future work. Would it not be too much effort to examine a few of them in this paper?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Rather than inputting the full unweighted relational matrix to the embedding, would it be possible to learn which pairs of context set input/outputs and the value of their comparison function to attend to? In addition to weighting pairs of data points, some variant of attention could help break the quadratic scaling.
What is the cause of the jagged mean line for RCNP in Figure 1 (b), (e) relative to CNP and GPs?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: An honest assessment of the limitations is given in Section 7, including the point that the full RCNP model has quadratic scaling.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your very positive comments and useful remarks about our work. We address your remarks and questions below.
### Weaknesses
> *The full RCNP variant has tractability issues, although it is shown that the simpler RCNP using the diagonal elements of the relational matrix performs satisfactorily.*
This is a fair point. As counterpoints to consider, we would like to remark that CNPs are commonly applied in the low-data setting (e.g., up to several hundred data points). Thus, while the quadratic scaling of full RCNPs is admittedly improvable, it is not as limiting as it might seem from its asymptotic analysis.
Moreover, as you mentioned, for the very common application of translational invariance, “diagonal” RCNPs are enough. Importantly, for the case of translational equivariance, the diagonal RCNP is not just an approximation of the full RCNP, but an exact solution. This is proved in our theorems (see Proposition 4.9 in the main text and its proof in Appendix B.2), and confirmed by our empirical results.
> *Equivariances other than translations and rigid transformations, and other variants of CNP are left to future work. Would it not be too much effort to examine a few of them in this paper?*
Good point. In terms of examining other variants of (R)CNP, we implemented and ran new experiments with the autoregressive RCNP (AR-RCNP), where the AR-CNP is a recent model introduced in Bruinsma et al. (2023). The AR-RCNP demonstrates the application of our technique to another model of the CNP family. Results are presented in Table 1 of the rebuttal PDF, and will be included in the revised paper.
Regarding other equivariances: we now also implemented equivariance to proper and improper rotations. Please refer to our answer to reviewer C6E4 for a detailed description of the newly implemented model, experiments and related discussion. We report a summary below.
The newly implemented equivariance is based on the comparison function:
$$g_\text{rot}(\mathbf{x},\mathbf{x}') = (\Vert \mathbf{x} - \mathbf{x}' \Vert_2, \Vert \mathbf{x} \Vert_2, \Vert \mathbf{x}' \Vert_2),$$
which is based on the distance between points as well as their distance from the origin (the center of rotation). This makes the comparison function invariant to rotations and mirroring, but not translations. Thus, it will induce a FullRCNP which is equivariant to these transformations, according to our Proposition 4.5.
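The claimed invariance of this comparison function can be checked numerically; a small standalone sketch (hypothetical code, not from the paper's implementation) applying a proper rotation, a reflection, and a translation in 2D:

```python
import numpy as np

def g_rot(x, xp):
    # (||x - x'||, ||x||, ||x'||): invariant to rotations/reflections about the origin
    return np.array([np.linalg.norm(x - xp), np.linalg.norm(x), np.linalg.norm(xp)])

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # proper rotation
M = np.array([[1.0, 0.0], [0.0, -1.0]])           # mirror (improper rotation)

x, xp = np.array([1.0, 2.0]), np.array([-0.5, 0.3])
print(np.allclose(g_rot(R @ x, R @ xp), g_rot(x, xp)))  # True
print(np.allclose(g_rot(M @ x, M @ xp), g_rot(x, xp)))  # True
t = np.array([1.0, 1.0])
print(np.allclose(g_rot(x + t, xp + t), g_rot(x, xp)))  # False: not translation-invariant
```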
We tested this new RCNP variant in a new set of synthetic experiments which incorporate rotational and mirror symmetry. Preliminary results are presented in Table 2 of the rebuttal PDF, and full results will be included in the revised paper and Supplementary Material. Our results show that the FullRCNP is able to leverage the equivariances intrinsic to the task to outperform a standard CNP in both low and high dimensions.
In conclusion, our framework is suitable for isotropy, translational equivariance, and the newly implemented (proper and improper) rotational equivariance. Whether other equivariances can be expressed via our relational approach is an interesting direction for future work. Nonetheless, we believe the equivariances addressed in the paper represent a large class of useful equivariances. This is demonstrated by the fact that several key papers and models proposed in the neural process literature (as well as the broader machine learning literature) focus *only* on translational equivariance (e.g., ConvCNP, Gordon et al. 2020; ConvGNP, Markou et al., 2022; FullConvGNP, Bruinsma et al., 2020), with the limitations we discussed in the paper and that our proposed method overcomes.
### Questions
> *Rather than inputting the full unweighted relational matrix to the embedding, would it be possible to learn which pairs of context set input/outputs and the value of their comparison function to attend to? In addition to weighting pairs of data points, some variant of attention could help break the quadratic scaling.*
This is an interesting point and related to work on Transformer neural processes (Nguyen & Grover, 2022). However, we are unsure how this would address the scaling issue per se, since the attention mechanism is also notoriously quadratic in the size of the context set.
A somewhat similar idea, along these lines of finding a middle ground between the full RCNP and the diagonal RCNP, would consist of something like a low-rank approximation of the full comparison set, or by only comparing to a chosen subset of K "important" context points. This would relate to work in the Gaussian process literature (“sparse” GPs and inducing points), and also to the concept of coresets. Indeed, this is a potential direction of future work that we have been considering. Please see our response to reviewer N64F for additional remarks on this point.
> *What is the cause of the jagged mean line for RCNP in Figure 1 (b), (e) relative to CNP and GPs?*
- The jagged mean is a byproduct of the common ReLU activation function used in the (R)CNP architecture. Due to the ReLUs, the output of the network will naturally be a piecewise linear function (i.e., with discontinuous derivatives, which makes it look jagged).
- Standard CNPs here underfit the data, so their mean *appears* smoother. However, note that it is still jagged after zooming in – the discontinuous derivative is just less prominent due to the underfitting. The outputs of CNPs commonly look jagged (e.g., see Figure 2 of Garnelo et al. 2018).
- GPs are kernel methods, here with a Matérn 5/2 kernel which is twice differentiable, so the posterior mean function will also be twice differentiable (i.e., it will look reasonably smooth).
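The first point above can be illustrated directly: any MLP with ReLU activations computes a piecewise-linear function of its input, so its second differences vanish everywhere except near the kinks, and those kinks are what make the mean look jagged. A standalone numpy sketch with random weights (unrelated to the paper's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # output layer

def mlp(x):
    h = np.maximum(0.0, W1 @ x + b1)  # each ReLU adds a kink where its unit switches on/off
    return (W2 @ h + b2)[0]

xs = np.linspace(-2.0, 2.0, 401)
ys = np.array([mlp(np.array([x])) for x in xs])
second_diff = np.abs(np.diff(ys, 2))
# Piecewise linear: second differences are ~0 everywhere except around the
# (at most 8) kink locations.
print(np.mean(second_diff < 1e-6))  # close to 1.0
```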
### References
- Bruinsma et al. (2023). Autoregressive Conditional Neural Processes. *ICLR*.
- Garnelo et al. (2018). Conditional Neural Processes. *ICML*.
- Nguyen & Grover (2022). Transformer neural processes: Uncertainty-aware meta learning via sequence modeling. *ICML*. | Summary: The paper presents a novel approach for incorporating equivariance into conditional neural processes (CNPs) which can scale to high dimensions. Modelling equivariance is essential to improve the performance of CNPs. Unlike previous approaches that use convolution and become impractical with increased input dimensions, this work uses relational information and discards absolute information. As a result, this simple method can handle high-dimensional inputs. The authors also prove that their approach is context preserving which means they do not lose other information. Their empirical results demonstrate that the proposed method is comparable to convolutional CNPs, GNPs etc. on a diversified range of tasks.
Strengths: **Strengths**
1. This paper addresses the challenge of incorporating equivariance into Conditional Neural Processes (CNPs) for high-dimensional problems. The proposed models are shown to be translation-equivariant, allowing them to scale to higher dimensions and are comparable/outperform existing CNP and GNP models on a wide range of tasks.
2. The paper is well-structured and clearly presented, with a strong motivation for the research, precise technical statements, and comprehensive background.
3. The paper provides robust theoretical results, all of which are supported by proof. The empirical investigation is extensive, covering a diversified range of tasks. The authors demonstrate the effectiveness of their models through experiments on synthetic Gaussian and non-Gaussian regression tasks, Bayesian optimization, Lotka-Volterra models and reaction-diffusion models.
Weaknesses: **Weakness**
Given the extensive use of RGNP in the experiments and its significant role in the paper's findings, it would still be beneficial for the authors to provide a concise description or a mathematical formulation of the RGNP in the main text, even though it is not difficult to extend the mathematical description of RCNP to RGNP.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: **Questions**
1. I wonder if there is a middle ground between the full RCNP and the diagonal RCNP. In the matrix terminology, it is a bit similar to a low-rank approximation of the full comparison set. I think this would allow people to balance computational cost and expressivity. Also, by comparing only to a subset of K "important" context points, we can lower the cost to O(NK) rather than the quadratic cost of the full RCNP.
2. By only using pairwise comparison in the model construction, I wonder whether we would lose high-order interaction which involves more than two context points.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have discussed the limitations of their work and I have no concerns of any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your very positive and insightful comments, we are glad to see that you find our paper particularly well-suited for NeurIPS. In the following we address your questions and points raised.
### Weaknesses
> *Given the extensive use of RGNP in the experiments and its significant role in the paper's findings, it would still be beneficial for the authors to provide a concise description or a mathematical formulation of the RGNP in the main text. Even though it is not difficult to extend the mathematical description of RCNP to RGNP.*
Thanks for pointing this out. In the revised paper we will include a concise mathematical description of the RGNP model in the main text.
### Questions
> *1. I wonder if there is a middle ground between the full RCNP and the diagonal RCNP [...] similar to a low-rank approximation of the full comparison set. [...] Also, by comparing only to a subset of K "important" context points [...].*
This is a very interesting and insightful suggestion to lower the asymptotic cost of full RCNPs, which would be related to other work in the Gaussian process literature (“sparse” GPs and inducing points), and also to the concept of coresets. Indeed, this is a potential direction of future work aimed at reducing the computational complexity.
A naive baseline to consider in this direction would be to only encode a target point based on the $K$ closest context points, thus reducing the cost of FullRCNP to $O(K^2 M + N M)$, where the first term is the relational encoding and the second term is the naive cost of computing the distances between $M$ targets and $N$ context points (although the latter can be sped up via smarter nearest-neighbor search algorithms; see e.g. Hyvönen et al. 2022). While $K$ is nominally a constant, to guarantee a reasonable performance in practice we envision it could be chosen based on the (maximum) number of context points in the task. Still, by choosing a square-root scaling ($K \propto \sqrt{N}$) we can recover a manageable asymptotic cost of $O(N M)$.
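The naive baseline described above could be sketched as follows (hypothetical code, not the paper's implementation): select the $K$ closest context points per target with a brute-force distance computation, then restrict the relational encoding to that subset.

```python
import numpy as np

def k_nearest_context(target_x, context_x, K):
    # Naive O(N M) pairwise distances between M targets and N context inputs;
    # the relational encoding would then use only K points per target,
    # giving O(K^2 M) instead of O(N^2 M) for the FullRCNP encoding.
    d = np.linalg.norm(target_x[:, None, :] - context_x[None, :, :], axis=-1)  # (M, N)
    return np.argsort(d, axis=1)[:, :K]

rng = np.random.default_rng(0)
context_x = rng.normal(size=(100, 2))  # N = 100 context inputs
target_x = rng.normal(size=(10, 2))    # M = 10 target inputs
idx = k_nearest_context(target_x, context_x, K=10)
print(idx.shape)  # (10, 10)
```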
> *2. By only using pairwise comparison in the model construction, I wonder whether we would lose high-order interaction which involves more than two context points.*
This is an excellent point and the perhaps surprising theoretical answer is no, in our construction. Our context-preservation theorems demonstrate that no information is lost about the entire context set, so in principle a RCNP with a sufficiently large network is able to reconstruct any high-order interaction of the context set (see Section 4.2 in the main text and the full proof in Appendix B.2).
However, in practice the answer can be somewhere in the middle, in that the chosen representation is built on two-point interactions, so depending on the network size it may be harder for the network to effectively encode the simultaneous interaction of many context points.
We will add these informative remarks in the theoretical section of the Supplement.
### References
- Hyvönen et al. (2022). A Multilabel Classification Framework for Approximate Nearest Neighbor Search. *NeurIPS*.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. My questions have been addressed and I strongly support the acceptance of this paper. | Summary: The authors propose a class of neural processes that can be constructed to enforce invariance to particular properties like translation and rotation.
Strengths: - As far as I know, the proposed architecture and technique for enforcing invariances in Section 3 is novel.
- Improvements are shown over standard neural processes on a range of tasks
- The authors show both theoretically and empirically that the proposed architecture does indeed enforce the invariances described.
Weaknesses: - Choosing $g$ in Eq 5 to enforce a particular invariance seems difficult. The authors provide choices that enforce isotropy and translation invariance, but it would not be obvious how to enforce a different type of invariance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review and comments, and we are glad to see that you found our paper interesting.
Regarding your concern about enforcing particular invariances, we would like to comment that it is indeed true that our method only applies to equivariances that can be enforced via a comparison function $g$. This is explicitly mentioned in the paper, for example in the Introduction:
> Our proposed method works for equivariances that can be expressed relationally via comparison between pairs of points (e.g., their difference or distance); [...]
and the Discussion (just before the *Limitations* paragraph):
> Our method applies to equivariances that can be induced via an appropriate comparison function; here we focused on translational equivariances (induced by the difference comparison) and equivariances to rigid transformations (induced by the distance comparison).
To further address your point and provide an additional example, we ran a new set of experiments with a FullRCNP which implements equivariance to proper and improper rotations (i.e., rotations + mirroring). As per Proposition 4.5 of our paper, we need to select an adequate comparison function, which we define as follows:
$$g_\text{rot}(\mathbf{x},\mathbf{x}') = (\Vert \mathbf{x} - \mathbf{x}' \Vert_2, \Vert \mathbf{x} \Vert_2, \Vert \mathbf{x}' \Vert_2)$$
This comparison function is based on the distance between points as well as their distance from the origin (the center of rotation). This representation makes the comparison function invariant to rotations and mirroring, but not translations. Thus, it will induce a FullRCNP which is equivariant to these transformations.
As mentioned in the global rebuttal, we tested this new RCNP variant in a new set of synthetic experiments, namely a regression task involving a Gaussian Process whose mean and covariance functions introduce rotational and mirror symmetry, but not translational symmetry. Such a task is representative for example of a physical model set in a potential well (e.g., a point charge or mass in the origin). Due to the limited time available for the rebuttal, we present preliminary results in Table 2 of the rebuttal PDF. Full results will be included in the revised paper and Supplementary Material. Briefly, our results show that the FullRCNP is able to leverage the equivariances intrinsic to the task to outperform a standard CNP in both low and high dimensions.
In conclusion, our relational neural process framework is suitable for isotropy, translational equivariance, and the newly implemented (proper and improper) rotational equivariance. Whether other equivariances can be expressed via our relational approach is an interesting direction for future work. Nonetheless, we believe the equivariances addressed in the paper represent a large class of useful equivariances. This is demonstrated by the fact that several key papers and models proposed in the neural process literature (as well as the broader machine learning literature) focus *only* on translational equivariance (e.g., ConvCNP, Gordon et al. 2020; ConvGNP, Markou et al., 2022; FullConvGNP, Bruinsma et al., 2020), with the limitations we discussed in the paper and that our proposed method overcomes.
For clarity, we will include an additional explanation based on the paragraph above to the *Limitations* section of the paper to make the contribution and limitations of the paper clearer.
### References
- Bruinsma et al. (2020). The Gaussian Neural Process. *AABI*.
- Gordon et al. (2020). Convolutional Conditional Neural Processes. *ICLR*.
- Markou et al. (2022). Practical Conditional Neural Processes via Tractable Dependent Predictions. *ICLR*. | Rebuttal 1:
Rebuttal: We thank the anonymous reviewers for their comments and suggestions for improving our paper. We are glad to see that the majority of reviewers found the paper interesting and of impact. We provide clarifications and detailed answers to perceived weaknesses and raised questions in our individual responses.
In particular, we added a number of requested experiments and comparisons to baselines in different settings to the paper. Tables and Figures presenting these results can be found in the attached pdf. Please find below a short overview of the implemented changes:
### New experiments and results
- As asked by reviewer jSJf, we implemented and ran new experiments with another variant of RCNP – the autoregressive RCNP (AR-RCNP), where the AR-CNP is a recent model introduced in Bruinsma et al. (2023). The AR-RCNP demonstrates the application of our technique to another model of the CNP family. Results are presented in Table 1 of the rebuttal PDF, and will be included in the revised paper.
- As suggested by reviewers C6E4 and jSJf, we provided the RCNP implementation of a new class of equivariances, namely proper and improper rotations. We tested this new RCNP variant in a new set of synthetic experiments, a regression task involving a Gaussian Process whose mean and covariance function introduce rotational and mirror symmetry, but not translation. Such a task is representative for example of a physical model with a potential well. Due to the limited time available for the rebuttal, we present preliminary results in Table 2 of the rebuttal PDF. Full results will be included in the revised paper and Supplementary Material. Please also see our response to reviewer C6E4 for further details.
- As requested by reviewer bQ2F, we ran our RCNP model on an image completion task using classic image datasets such as MNIST and CelebA, to compare our performance to other members of the CNP family. Due to the limited time available for the rebuttal, we could only complete preliminary experiments with downscaled images. Results are presented in Table 3 of the rebuttal PDF, with examples in Figure 1 of the PDF, and will be included in the revised Supplementary Material.
### Other changes
Besides the new results explained above, we implemented a number of changes to the main text and Supplementary Material to further improve the clarity of the paper.
- We clarified our motivation and contributions (see our response to reviewer bQ2F for details).
- We expanded the Related Works section with new references and explained more in detail how our work differs from the existing literature on equivariant (C)NPs (please see our answer to reviewer bQ2F). Importantly, we highlighted how all existing works require a lattice (regular grid) construction in the input space, often followed by convolutions, which strongly limits scalability of the methods to higher input dimensions; a limitation which is not shared by our distinct “relational” approach.
- We expanded our limitations section, to better highlight that our relational approach hinges on finding an appropriate comparison function $g$. While our paper covers several important equivariances of large practical interest (translational, isotropy, now rotations), whether other equivariances can be expressed via our relational approach is left for future work. For details, see the response to reviewer C6E4.
- In addressing a remark by reviewer bQ2F, we clarified our definition of equivariance of prediction maps, based on the invariance of their representation (Eq. 3 in the paper). Our definition, used for its convenience in our paper, is mathematically equivalent to the more common definition of equivariance for general mappings, $\tau f(X) = f(\tau X)$; however, the comment made us realize this equivalence is not immediately apparent. We added a section in the Supplementary Material explicitly showing this equivalence, starting from standard definitions of equivariance and group action (Gordon et al., 2020; Holderrieth et al., 2021).
### References
- Bruinsma et al. (2023). Autoregressive Conditional Neural Processes. *ICLR*.
- Gordon et al. (2020). Convolutional Conditional Neural Processes. *ICLR*.
- Holderrieth et al. (2021) Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes. *ICML*.
Pdf: /pdf/e01eed16dcd630c3545ce8e76fd926fcf8a3e0b8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
For SALE: State-Action Representation Learning for Deep Reinforcement Learning | Accept (poster) | Summary: This paper proposes a new RL algorithm called TD7, which is based on TD3, and adopts additional techniques including (1) learning state-action representations, (2) LAP prioritized replay, (3) a behavior cloning term in the learning objective, and (4) checkpoints, where (3) and (4) are applied exclusively to the offline RL setting. They demonstrate the superior empirical performance of TD7 on the D4RL benchmark.
Strengths: 1. The paper is well written and easy to follow.
2. Extensive experiments and ablation studies to evaluate the proposed methods.
3. The proposed method is shown to have strong empirical performance on established benchmarks.
Weaknesses: 1. **Limited contribution**: While I appreciate TD7 as a competent algorithm for both online and offline RL, I'm not so sure about the contribution of SALE especially regarding its applicability to other existing baselines. It doesn't seem conclusive to me that the proposed embedding is generally superior for RL agents. From the ablation studies it seems many independent design choices could significantly affect the final performance, which may also be algorithm dependent.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could you provide more evidence that shows SALE's applicability to other existing RL methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are not explicitly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and the feedback.
**Combining SALE with other algorithms:** Thank you for bringing up this concern. In the table below we have included results with SALE applied to SAC. Learning curves are included in the PDF of the general response (Figure 2). No modifications or hyperparameter changes were made to either SALE or SAC, we simply applied SALE directly. We hope this addresses your primary concern!
| | SAC + SALE | SAC |
|-------------|---------------------|--------------------|
| HalfCheetah | **17316 $\pm$ 236** | 15526 $\pm$ 697 |
| Hopper | 2842 $\pm$ 951 | **3167 $\pm$ 485** |
| Walker2d | **6887 $\pm$ 306** | 5681 $\pm$ 329 |
| Ant | **9288 $\pm$ 462** | 4615 $\pm$ 2022 |
| Humanoid | **9062 $\pm$ 335** | 6555 $\pm$ 279 |
Results are for 5M time steps, and 10 seeds. $\pm$ captures a 95% confidence interval.
This is further evidence that while there are many design choices to be made, we are presenting a set of design choices which are robust to algorithm and task. Regardless, it is our hope that by presenting an in-depth empirical study, readers can determine which choices are more important than others, and can start with a good default set.
**Limitations:** As stated in the checklist, these are included in Appendix J. Sorry for the misunderstanding, space was a concern in the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have increased my rating as my main concern on the compatibility with existing baselines has been addressed. | Summary: The paper introduces an approach dubbed SALE for learning embeddings that model the interaction between state and action in low-level state environments. The authors extensively study the design space of these embeddings and integrate SALE into the TD3 algorithm along with 3 other components to form a new algorithm, TD7.
They perform an extensive empirical evaluation over the design space to discover the most significant contributors to final performance. The paper shows that TD7 outperforms existing continuous control algorithms on MuJoCo OpenAI gym tasks.
Strengths: **Originality**
The paper's originality lies in its comprehensive study of learning embeddings that model the interaction between state and action in low-level state environments. The experiments conducted provide evidence of the effectiveness of their approach in improving the performance of TD3.
**Quality**
See Weaknesses Section.
**Clarity**
The paper is well-written and organized, making it easy for readers to follow the authors' thought process and understand the methodology and results.
**Significance**
The authors demonstrate that their proposed method, when integrated into TD3 to form what they name TD7, performs significantly better than TD3 for the MuJoCo environments. This work brings to light the importance of learning the interaction between state and action information to improve performance in RL.
Weaknesses: While the paper presents a novel approach to learning embeddings for state and action information for RL, there are areas where it could be improved.
Firstly, the evaluation of TD7 could be expanded to include more benchmarks. While the MuJoCo environments provide valuable data, including additional benchmarks such as Procgen or Brax could offer a more comprehensive understanding of the method's performance and any potential limitations.
These considerations, while not detracting from the originality and potential significance of the method, do highlight areas where further work could be beneficial.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See Weaknesses Section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Is part of this review missing?**
We just wanted to double check that this review is complete. It currently reads as if some of the weaknesses were mistakenly deleted. If some other weaknesses did indeed get removed, let us know so we can improve the paper accordingly. Regardless, thank you for your time and the review. We address your comments below.
**TD7 on other benchmarks:** To address this concern, we have collected results for TD7 on the DMC suite. As shown in the table below, TD7 consistently maintains its advantage over TD3 in DMC as well. In the attached PDF of the general response, we include the full learning curves for these environments (Figure 1). We hope this addresses your primary concern!
| | TD7 | TD3 |
|-------------------------|---------------------|--------------------|
| Ball in Cup Catch | **982 $\pm$ 1.8** | 981 $\pm$ 3.3 |
| Cartpole Balance Sparse | **1000 $\pm$ 0.0** | **1000 $\pm$ 0.0** |
| Cartpole Swingup | **875 $\pm$ 5.8** | 871 $\pm$ 6.6 |
| Cheetah Run | **914 $\pm$ 2.9** | 866 $\pm$ 23.5 |
| Finger Spin | **989 $\pm$ 1.4** | 975 $\pm$ 14.1 |
| Finger Turn Hard | **742 $\pm$ 136.8** | 552 $\pm$ 80.7 |
| Hopper Stand | **774 $\pm$ 177.4** | 480 $\pm$ 180.8 |
| Hopper Hop | **175 $\pm$ 98.7** | 58 $\pm$ 42.3 |
| Humanoid Stand | **504 $\pm$ 336.5** | 7 $\pm$ 0.4 |
| Humanoid Walk | **389 $\pm$ 157.6** | 36 $\pm$ 72.4 |
| Walker Stand | **991 $\pm$ 1.5** | 985 $\pm$ 3.1 |
| Walker Walk | **981 $\pm$ 2.7** | 965 $\pm$ 5.2 |
| Walker Run | **807 $\pm$ 62.5** | 631 $\pm$ 74.7 |
| Quadruped Walk | **950 $\pm$ 4.9** | 750 $\pm$ 168.9 |
| Quadruped Run | **684 $\pm$ 87.5** | 503 $\pm$ 193.7 |
| Mean | **784** | 644 |
| Median | **875** | 750 |
| IQM | **856** | 733 |
Results are for 1M time steps and 10 seeds. $\pm$ captures a 95% confidence interval.
---
Rebuttal Comment 1.1:
Comment: The results for TD7 in comparison to TD3 are compelling and offer a clearer picture of the algorithm's performance across a diverse set of tasks.
Given the updated information and the detailed results you've shared, I am pleased to increase my score. | Summary: This work introduces a novel state-action representation learning framework SALE and two other techniques (e.g. checkpointing, a new type of Q value clipping) that substantially improve the data efficiency and final performance of TD3 in online and offline RL.
Strengths: 1. The work studies joint state-action representation learning, which is under-studied with respect to state representation learning.
2. The method is very clear, and the authors provide extensive empirical analysis and ablation explaining their design choices.
3. Some techniques introduced in this work (checkpointing, Q clipping in Eq. 6) can be applied to most RL methods and are thus influential beyond representation learning.
Weaknesses: **Core comments:**
1. I believe the paper would read more easily after reorganization.
* Section 4.2 feels like an ablation study that should follow the core empirical results. Also, Section 4.2 references TD7 results, though TD7 is not introduced until Section 5.
* The way Section 5 begins by mentioning stability and extrapolation error jumped out at me; the intro and related work emphasize the representation learning side of this work much more, and extrapolation error is not mentioned in sections 1-4. I believe the paper would read better if the authors painted a clearer picture of what the stability side of the work entails in the introduction, and included transitions between sections 4 and 5 that clearly indicate “we talked about one issue, and now we’re going to talk about another.”
* Transition sentences before bolded subheadings (e.g. Normalizing embeddings, Fixed embeddings) would help in Section 4 – something to the effect of “We now discuss two important aspects of SALE…”
1. The checkpoint policy $\pi_c$ denotes the policy with the largest minimum return during the assessment phase. During agent evaluation, you compute the return over N rollouts of $\pi_c$ rather than $\pi_t$, correct? If so, I’m a bit skeptical of the results; is it possible that the perceived benefits of checkpointing are simply due to the batched nature of training it requires? Suppose you run vanilla TD3 within the checkpointing framework but perform separate evaluations using the checkpoint policy *and* the current policy. This experiment would be equivalent to vanilla TD3 with variable-length batched updates. Do the batched updates alone improve performance?
**Minor comments:**
1. Figure 2 would be more readable with error bars rather than text to indicate the 95% confidence interval.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. In principle, any latent state/action RL framework can be modified such that $Q$ and $\pi$ take both the original state/action and the latent state/action as inputs. Would it be fair to say the ablations in Fig. 2 suggest that any state/action representation learning method should have $Q$ and $\pi$ depend on both the original and latent state/actions?
1. Fig. 2: Can the authors clarify what is meant by percent loss here? Are these quantities all with respect to default values? Where do the default values fall on these figures?
1. The authors mention that checkpointing harms performance early in training – is this because the policy changes rapidly early on? Can the authors provide more detail?
1. Line 217: How does checkpointing change the relevance of data?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and helpful comments.
**Reorganization and writing:** Thank you for bringing this up and providing concrete suggestions. We will add transition statements and expand the introduction to clearly establish the challenges associated with our section on stability. We will also look at moving section 4.2 to a later part of the paper, although it is harder to promise this change without first doing so and making sure the flow of the paper still makes sense.
**Do the batched updates alone improve performance?** Great question. We show this result for TD7 in Figure 25 and Table 13 in Appendix G, where “current policy” refers to TD7 trained in an identical fashion but evaluated with the current policy rather than the checkpoint policy. The result only goes to 1M time steps, however, so in the table below we present results for the full 5M time steps.
| | TD7 (checkpoints, checkpoint policy) | TD7 (checkpoints, current policy) | TD7 (no checkpoints) |
|-------------|--------------------------------------|-----------------------------------|----------------------|
| HalfCheetah | 18165 $\pm$ 255 | 18214 $\pm$ 237 | **18328 $\pm$ 331** |
| Hopper | **4075 $\pm$ 225** | 3690 $\pm$ 562 | 3851 $\pm$ 372 |
| Walker2d | **7397 $\pm$ 454** | 7258 $\pm$ 469 | 6519 $\pm$ 209 |
| Ant | 10133 $\pm$ 966 | 9807 $\pm$ 730 | **10388 $\pm$ 1024** |
| Humanoid | **10281 $\pm$ 588** | 10157 $\pm$ 560 | 9521 $\pm$ 820 |
Additionally, we ran the suggested experiment for TD3. The results are presented in the table below. Full learning curves are included in the PDF of the general response (Figure 3). For both tables, the results are for 5M time steps and 10 seeds. $\pm$ captures a 95\% confidence interval. While there are some performance benefits for some environments, the results do not suggest that the batched updates are the main reason for the performance benefit.
| | TD3 (checkpoints, checkpoint policy) | TD3 (checkpoints, current policy) | TD3 (no checkpoints) |
|-------------|--------------------------------------|-----------------------------------|----------------------|
| HalfCheetah | 14075 $\pm$ 1064 | 13946 $\pm$ 1022 | **14337 $\pm$ 1491** |
| Hopper | 3366 $\pm$ 362 | 3285 $\pm$ 578 | **3682 $\pm$ 83** |
| Walker2d | **5189 $\pm$ 405** | 5126 $\pm$ 386 | 5078 $\pm$ 343 |
| Ant | **6168 $\pm$ 677** | 6149 $\pm$ 713 | 5589 $\pm$ 758 |
| Humanoid | **5785 $\pm$ 154** | 5534 $\pm$ 136 | 5433 $\pm$ 245 |
**Questions:**
1. **Including the original input:** Correct! This is what we found during our testing (and shown in Figure 2). The input to TD7’s critic includes both the embeddings $z^{sa}$ and $z^s$ (latent state action) as well as the original state and action. The input to the actor includes the embedding $z^s$ and the original state.
2. **Percent loss:** Percent loss refers to $\frac{\text{performance of modification} - \text{performance of TD7}}{\text{performance of TD7}} \times 100$. TD7 uses all the default choices, so the default choices correspond to 0 in all cases. We will include these details in the next version of the paper.
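The percent-loss formula above is a one-liner; a small sketch (the function name is ours, not from the paper):

```python
def percent_loss(modified, td7):
    """Relative performance change of a modified design choice versus
    the TD7 default, in percent. Negative values mean the modification
    performs worse than the default; the default itself maps to 0."""
    return (modified - td7) / td7 * 100
```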
3. **Early training with checkpoints:** Experimentally, we looked at this in Table 11 in Appendix F. Your intuition is correct, in that the policy (and corresponding data) improves very rapidly early in learning so batching the training means the data will be more distant from the behavior of the current policy (distribution shift), and corresponds to a lower reward.
4. **Relevance of data:** By relevance we mean how much the data corresponds to the current policy (how much distribution shift there is). In the standard training paradigm, after every training step, a new data point is collected. However, using checkpoints means that the training gets batched, so as the policy trains, before collecting the next batch, it may select actions which are not yet contained in the dataset. We will clarify this in the next version of the paper. Empirically we did not find this problematic for 20-50 episodes (after early learning stages), but presumably if this number was set to very large values (say, 1000), then the training paradigm would resemble offline RL and would likely negatively affect performance.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I want to thank the authors for providing thorough responses to my comments (as well as other reviewer comments). All of my comments have been addressed. I maintain my score, and would advocate for accepting this paper.
Reviewers Casd and FeJr expressed concern about the work’s novelty, though I do not share these concerns. Joint state-action representation learning is under-explored compared to just state or action representation learning, and this work provides a thorough analysis of relevant design choices missing in the existing literature.
Reviewer sfFv noted that the proposed improvements – when considered individually – are incremental. I think this is a valid comment regarding target Q clipping (Eq. 6) and checkpointing (Sec. 5.2), but the core contribution of this work is the novel state-action representation learning method and its analysis. While target Q clipping and checkpointing can in principle be applied to many RL algorithms without SALE, it serves a particular purpose in the context of SALE which the authors mention in section 5.1. The policy and Q function are more prone to extrapolation error because SALE increases the dimensionality of their inputs, and these techniques help mitigate this effect.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for advocating for the paper. We agree with your comments and appreciate your support. | Summary: This paper proposes TD7, an improved version of the popular TD3 algorithm with 4 additional techniques: state-action learned embeddings (SALE, the major one among the four), using policy checkpoints for stable evaluation, an existing prioritized experience replay method called LAP, and an existing offline RL algorithm called TD3+BC. The four techniques strengthen the vanilla TD3 algorithm in different ways. TD7 is evaluated on MuJoCo and D4RL benchmarks for online and offline RL settings, significantly outperforming existing continuous control algorithms. Comprehensive ablation studies are also included.
Strengths: - This paper is well written and the content is clear.
- The proposed improvements (almost all) make sense to me and are well motivated or explained.
- The part of SALE (Section 4) is interesting to me, although SALE can be viewed as an improvement over OFENet. I think the detailed studies on useful techniques for stable representation learning could be inspiring to other similar problems in RL.
- This paper conducts ablation studies for most design choices to rule out the alternative choices empirically.
- The code implementation is neat and I assume it is easy to reproduce the experiments.
Weaknesses: - The proposed improvements are piecemeal, each of which is incremental or existing.
- A few additional hyperparameters (e.g., the dimensionality of $z^{s,a}$, the episode number to maintain the policy unchanged) are introduced, although the authors give recommended values.
- Although the paper has comprehensive empirical studies, intuitive explanations are lacking.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: TD7 needs to keep the current policy fixed for several episodes:
1. What if the episode horizon is long?
2. Will this be inconvenient to combine high update frequency (like used in REDQ)?
3. How should one weigh using the checkpoint policy for evaluation (exploitation) against the fact that the comparison and selection of checkpoint policies are based on the performance of exploration policies?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The proposed improvements seem to be general and orthogonal to different continuous control RL algorithms. However, SALE is not combined with SAC, TQC, and REDQ to evaluate its generality.
- TD7 is evaluated on MuJoCo and D4RL whose backends are both OpenAI MuJoCo suite. I would be interested in seeing TD7 in other continuous control benchmarks like DMC.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and considerate questions.
**Intuitive explanations:** Space was obviously a bit of a concern with the paper (and fortunately the camera-ready allows for an additional page). We would be happy to expand on some of the intuition/reasoning behind our improvements with this additional space. We are happy to discuss details during the rebuttal phase if there are any specific improvements you have questions about.
**Checkpoint questions:** It’s worth highlighting that checkpoints are not a fundamental aspect of TD7, as they are easy to remove. We ablate over the usage of checkpoints in Figure 5 and more extensively in Appendix F. Notably, we still outperform existing state-of-the-art methods without the use of checkpointing. In situations where checkpoints are problematic, there does exist the option to not use them. However to answer your questions:
- **Long episode horizon:** This is a good thought. Ultimately any method which relies on any kind of MC estimate could have problems for long horizons. There are some possible modifications to checkpoints such as only considering a finite/shorter horizon when evaluating the checkpoint, or similarly, accounting for the discount factor, or simply evaluating with fewer episodes. Another possibility is to use an off-policy evaluation method rather than using raw values. Ultimately there’s a lot of room for new research for unique settings.
- **Combining with high frequency updates:** We don’t believe this is an issue, although there is some potential for conflict: if the number of updates is very significant, say 1000x, then training approaches the offline setting. For smaller increases (say 2-5x), our empirical analysis (in Appendix F) did not suggest any stability or training issues with performing a large number of updates before collecting new data. In practice, since our method trains after an episode finishes (rather than during the episode), it could be argued that our method is better suited for expensive training methods, because training the networks during an episode could cause practical issues with latency.
- **The gap between the checkpoint policy and the exploration policy:** Great question! We don’t really cover this in the main paper due to a lack of space, but discuss it more in Appendix F. In practice, we use the minimum performance over N episodes, rather than the mean (quantiles could also be used for particularly stochastic environments). Since our goal is stability, the idea is that by using the minimum we put a much higher preference on robust policies. Following that logic, the performance gap between the exploration policy (which includes noise) and the true deterministic policy just helps us choose more robust policies. Again, there are some other possible research questions which arise depending on the ultimate goal from using checkpoints but we leave that to future work.
We agree with the reviewer that these are all interesting thoughts. We believe that checkpoints are a promising avenue for research and we hope to explore some of these ideas in future work.
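The checkpoint-selection rule described above (prefer the policy whose *minimum* assessment return is largest) can be made concrete with a short sketch. All names here are our illustration, not the actual TD7 implementation:

```python
def select_checkpoint(candidates):
    """candidates: list of (policy_id, per-episode returns collected
    during that policy's assessment phase). Returns the id of the
    policy whose minimum return is largest. Using the minimum rather
    than the mean prefers robust policies over ones with a high
    average but a poor worst case."""
    return max(candidates, key=lambda pair: min(pair[1]))[0]
```

For particularly stochastic environments, the rebuttal notes that a low quantile of the returns could be substituted for the raw minimum.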
**Combining SALE with other algorithms:** Good point! We present results of SAC with SALE compared against vanilla SAC. SALE significantly improves the results of SAC in the majority of the environments. This change only represents the addition of SALE, and with no underlying hyperparameter changes to either SAC or SALE. Since TQC and REDQ are both based on SAC, this shows that SALE could offer performance benefits to those methods as well. Learning curves are included in the PDF of the general response. Results are for 5M time steps and 10 seeds. $\pm$ captures a 95\% confidence interval.
| | SAC + SALE | SAC |
|-------------|---------------------|--------------------|
| HalfCheetah | **17316 $\pm$ 236** | 15526 $\pm$ 697 |
| Hopper | 2842 $\pm$ 951 | **3167 $\pm$ 485** |
| Walker2d | **6887 $\pm$ 306** | 5681 $\pm$ 329 |
| Ant | **9288 $\pm$ 462** | 4615 $\pm$ 2022 |
| Humanoid | **9062 $\pm$ 335** | 6555 $\pm$ 279 |
**TD7 on other benchmarks:** We have gathered results on the DMC suite. As shown in the table below, TD7 consistently maintains its advantage over TD3 in DMC as well. In the attached PDF of the general response, we include the full learning curves for these environments. Results are for 1M time steps and 10 seeds. $\pm$ captures a 95\% confidence interval.
| | TD7 | TD3 |
|-------------------------|---------------------|--------------------|
| Ball in Cup Catch | **982 $\pm$ 1.8** | 981 $\pm$ 3.3 |
| Cartpole Balance Sparse | **1000 $\pm$ 0.0** | **1000 $\pm$ 0.0** |
| Cartpole Swingup | **875 $\pm$ 5.8** | 871 $\pm$ 6.6 |
| Cheetah Run | **914 $\pm$ 2.9** | 866 $\pm$ 23.5 |
| Finger Spin | **989 $\pm$ 1.4** | 975 $\pm$ 14.1 |
| Finger Turn Hard | **742 $\pm$ 136.8** | 552 $\pm$ 80.7 |
| Hopper Stand | **774 $\pm$ 177.4** | 480 $\pm$ 180.8 |
| Hopper Hop | **175 $\pm$ 98.7** | 58 $\pm$ 42.3 |
| Humanoid Stand | **504 $\pm$ 336.5** | 7 $\pm$ 0.4 |
| Humanoid Walk | **389 $\pm$ 157.6** | 36 $\pm$ 72.4 |
| Walker Stand | **991 $\pm$ 1.5** | 985 $\pm$ 3.1 |
| Walker Walk | **981 $\pm$ 2.7** | 965 $\pm$ 5.2 |
| Walker Run | **807 $\pm$ 62.5** | 631 $\pm$ 74.7 |
| Quadruped Walk | **950 $\pm$ 4.9** | 750 $\pm$ 168.9 |
| Quadruped Run | **684 $\pm$ 87.5** | 503 $\pm$ 193.7 |
| Mean | **784** | 644 |
| Median | **875** | 750 |
| IQM | **856** | 733 | | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their comments, suggestions for improvement, and interest in the paper.
Overall, there were two main comments that were repeated among the reviewers which were:
**Does TD7 work on other benchmarks?** To answer this, we ran TD7 on 15 new environments from the DeepMind Control suite and benchmarked it against TD3 (**Figure 1**). The final results at 1M time steps are shown below.
| | TD7 | TD3 |
|-------------------------|---------------------|--------------------|
| Ball in Cup Catch | **982 $\pm$ 1.8** | 981 $\pm$ 3.3 |
| Cartpole Balance Sparse | **1000 $\pm$ 0.0** | **1000 $\pm$ 0.0** |
| Cartpole Swingup | **875 $\pm$ 5.8** | 871 $\pm$ 6.6 |
| Cheetah Run | **914 $\pm$ 2.9** | 866 $\pm$ 23.5 |
| Finger Spin | **989 $\pm$ 1.4** | 975 $\pm$ 14.1 |
| Finger Turn Hard | **742 $\pm$ 136.8** | 552 $\pm$ 80.7 |
| Hopper Stand | **774 $\pm$ 177.4** | 480 $\pm$ 180.8 |
| Hopper Hop | **175 $\pm$ 98.7** | 58 $\pm$ 42.3 |
| Humanoid Stand | **504 $\pm$ 336.5** | 7 $\pm$ 0.4 |
| Humanoid Walk | **389 $\pm$ 157.6** | 36 $\pm$ 72.4 |
| Walker Stand | **991 $\pm$ 1.5** | 985 $\pm$ 3.1 |
| Walker Walk | **981 $\pm$ 2.7** | 965 $\pm$ 5.2 |
| Walker Run | **807 $\pm$ 62.5** | 631 $\pm$ 74.7 |
| Quadruped Walk | **950 $\pm$ 4.9** | 750 $\pm$ 168.9 |
| Quadruped Run | **684 $\pm$ 87.5** | 503 $\pm$ 193.7 |
| Mean | **784** | 644 |
| Median | **875** | 750 |
| IQM | **856** | 733 |
Results are for 1M time steps and 10 seeds. $\pm$ captures a 95% confidence interval. Our results conclusively demonstrate that the improvements we have proposed to TD3 allow TD7 to consistently outperform it on this new benchmark of varied tasks. Between these new results, our online MuJoCo results, and our offline D4RL results, we hope that this convinces the reviewers that TD7 (and SALE) provides a consistent and meaningful improvement over TD3 in a wide range of tasks and settings.
**Does SALE work with other RL algorithms?** To answer this, we applied SALE to SAC, making no hyperparameter changes or additional modifications to SALE or SAC (**Figure 2**). The final results at 5M time steps are shown below.
| | SAC + SALE | SAC |
|-------------|---------------------|--------------------|
| HalfCheetah | **17316 $\pm$ 236** | 15526 $\pm$ 697 |
| Hopper | 2842 $\pm$ 951 | **3167 $\pm$ 485** |
| Walker2d | **6887 $\pm$ 306** | 5681 $\pm$ 329 |
| Ant | **9288 $\pm$ 462** | 4615 $\pm$ 2022 |
| Humanoid | **9062 $\pm$ 335** | 6555 $\pm$ 279 |
Results are for 5M time steps and 10 seeds. $\pm$ captures a 95% confidence interval. On 4 of the 5 tasks, SAC+SALE significantly outperforms vanilla SAC. Since most off-policy RL algorithms are based on either TD3 or SAC, we hope that this demonstrates that SALE can be applied more widely than just TD3.
**Additional experiments on checkpoints:** Some of the reviewers had some additional thoughts on checkpoints that we thought could be best answered with additional visualization.
In **Figure 3** we looked to answer a question posed by reviewer YfJA, which was: how does the change to batched training affect performance? This figure shows the performance of the checkpoint policy taken from TD3 trained with checkpoints, the current policy obtained simultaneously from TD3 trained with checkpoints, and vanilla TD3. The result shows that batched training can slightly improve performance in some cases (Ant), but largely has limited impact on performance.
This raises another question (which was asked by reviewer Casd), which is: how does using checkpoints contribute? In **Figure 4** we show the performance of 5 individual trials (10 trials is too visually cluttered), with checkpoints and without checkpoints. What we see is that while the average performance of the current policy is similar to the checkpoint policy, the stability of the checkpoint policy is much higher. For those who have not examined individual trials before, the instability of the current policy of TD7 may be surprising; however, this is consistent with other algorithms as well. In Appendix F, Figure 18, we plot the learning curve of a single seed of TD3.
This stability benefit is perhaps better understood numerically. In the following table we present the average standard deviation over the last 20 evaluations (corresponding to the final 100k time steps of training). In brackets we include the [min performance, max performance] over those evaluations.
| | TD7 (checkpoints) | TD7 (no checkpoints) | TD3 (checkpoints) | TD3 (no checkpoints) |
|-------------|----------------------|----------------------|--------------------|----------------------|
| HalfCheetah | 14, [18149, 18202] | 227, [17700, 18671] | 98, [13745, 14118] | 319, [13404, 14665] |
| Hopper | 28, [4025, 4132] | 573, [2200, 4224] | 121, [3109, 3571] | 344, [2412, 3727] |
| Walker2d | 83, [7108, 7408] | 607, [4315, 6685] | 26, [5124, 5213] | 538, [3159, 5270] |
| Ant | 440, [8763, 10223] | 609, [8748, 10676] | 196, [5568, 6280] | 716, [3287, 5709] |
| Humanoid | 422, [9414, 10808] | 1251, [5213, 10501] | 149, [5335, 5882] | 158, [5045, 5658] |
What we see is that checkpoints significantly reduce the variability of performance between evaluations, and improve the worst case performance.
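The table's statistics can be reproduced from an evaluation curve with a few lines; a sketch assuming `evals` holds per-evaluation returns in chronological order (whether the authors use the sample or population standard deviation is not stated, so the sample version below is our assumption):

```python
import statistics

def stability_stats(evals, window=20):
    """Summarize the last `window` evaluations of a training run as
    (std between evaluations, min performance, max performance).
    Lower std and a higher min indicate a more stable final policy."""
    tail = evals[-window:]
    return statistics.stdev(tail), min(tail), max(tail)
```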
We hope this addresses the concerns of the reviewers and are happy to discuss further.
Pdf: /pdf/5e4c896e4a10498b6f071ca300e135922057af11.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper introduces a couple of ideas to improve the empirical performance of the TD3 algorithm on continuous-action RL problems.
The core contribution is to show that learning state and action embeddings that are designed to predict themselves at the successive timestep can help achieve more reward. This is pretty nice because the representation learning process is done in conjunction with reward maximization and is shown to be useful and stable enough to enable faster learning in the same task.
A few other contributions are presented which I list below:
1- a normalization approach that scales the output of the embeddings, and which is compared against some of the existing normalization approaches such as batch and layer norm.
2- checkpointing the policy during online RL, akin to supervised learning, so rather than using the latest learned policy one can continue learning by using the best-performing policy.
3- clipping the value estimates during training to ensure that the estimates remain within a meaningful range.
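The value-clipping idea in point 3 can be sketched compactly: the bootstrapped target is clipped to the range of value estimates observed so far, guarding against extrapolation error. The function below is our hedged illustration of that idea, not the paper's exact Eq. 6 (names and the exact clipping range are our assumptions):

```python
def clipped_target(q_next, reward, discount, q_min, q_max):
    """Form a TD target while clipping the next-state value estimate
    to [q_min, q_max], the range of values seen during training, so
    the target stays within a meaningful range."""
    clipped = min(max(q_next, q_min), q_max)
    return reward + discount * clipped
```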
Very nice empirical results are then provided, most notably on the Mujoco benchmark in online RL, where the proposed approach named TD7 is capable of beating competitive baselines such as TD3 and SAC.
Strengths: The highlight of the paper is the impressive empirical results provided on the online RL experiments with the Mujoco baseline. I did my best to cross-check the performance of the baseline agents and it does seem that TD7 is capable of beating TD3 and SAC (these are the two baselines I checked) on these benchmarks. I do see some discrepancy between the SAC results reported here and those reported (for Hopper) in the original paper, but otherwise the results are consistent with published papers.
Weaknesses: I did enjoy reading this paper, and I applaud authors for their successful implementation and empirical results. That said, I am not sure what the core message of the paper is. I think the contributions need to be motivated, framed, and highlighted better. For instance, is this paper primarily about the advantage of learning state and action embeddings in RL? If yes, existing work has already demonstrated that, so other than the fact that the empirical results are superior, what kind of statements can we make about this new approach to learning embeddings that we did not know previously? Is it better for transfer learning? Is it faster in terms of run time? Is it better motivated theoretically?
Just to name a few papers, with the danger of missing other related and interesting work:
- An approach very similar to the one presented here was presented earlier by Zhang et al: "Learning Action Representations for Reinforcement Learning"
- Gelada et al also propose a very similar approach in "Deepmdp: Learning continuous latent space models for representation learning", and I am not clear in what sense SALE is doing something different than them.
- Chandak et al show in "Learning Action Representations for Reinforcement Learning" that one can learn a low-dimensional representation for actions.
These are just a few examples to show that the idea of learning state and action representations is well-explored and to me this limits the novelty of the work.
Moving to the other contributions, in terms of the normalization approach, I am not quite sure how this normalization is going to hedge against the issue of collapsing all states and actions to 0. To me it seems that if the reward is not part of the process of learning the embeddings, and we use this kind of successive predictive loss in the embedding space, then trivially the 0 solution would be optimal with or without normalization.
I found the checkpointing idea creative and interesting. That said, I have a few issues with it: 1- it seems, based on Figure 7, that checkpointing is rarely effective in the online case and, as the paper states, not applicable in the offline case. 2- it seems to me that when computing the goodness of a checkpointed policy, a potential issue is that during training we add some exploration noise to the action suggested by the network, so unless we do zero exploration, we need to account for the fact that we have not exactly executed the policy, and therefore some off-policy learning is needed to compute the true goodness of the checkpointed policies. Do I get this right?
The clipping idea also makes sense, and it is interesting, but it is still a limited contribution.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In what way is the new embedding-learning approach different than the previous work? What are the advantages and disadvantages, and what can we learn about designing embedding-learning algorithms in light of your experiments?
- How is the normalization hedging against the 0 mode collapse?
- Do we not need to do off-policy learning during check-pointing to account for exploration?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the very detailed review, and we appreciate the highlighted positives in our work! We address your key points below.
**Contribution:** We will aim to tighten up the writing in the introduction and provide a stronger and clearer message for the paper. We believe our paper adds to the current landscape of representation learning in RL by answering (or adding to) the following questions:
- **What design choices should I make if I want to use representation learning for low-level states?** Almost every representation learning paper in RL is applied to image-based tasks (this includes both Zhang et al., and Gelada et al. that the reviewer mentioned). This raises some important design questions when transferring ideas from an image-based task to low-level states. What should the target in the loss function be? What should the embedding be over? Is normalization needed and if yes, how should it be used? Should learning be end-to-end or not? For each of these questions we can point to several papers which take different approaches. Our paper tests each of these choices, shows what choices matter the most, and suggests the best option.
- **What challenges should I be aware of when using state-action representation learning?** While state-only representation learning is fairly common, state-action representation learning is not. Interestingly enough, we find that using this combination can introduce new challenges (i.e. extrapolation error in online RL, section 5.1) that have not been discussed or adequately addressed in prior work.
- **I just want the highest performing continuous control algorithm for online or offline RL.** The MuJoCo benchmark has been the cornerstone benchmark for continuous control for many years, and many important algorithms such as PPO, TD3 and SAC have been designed around it. This space is highly saturated and improvements over TD3/SAC are either incremental or come at high computational costs. TD7 suffers from neither of these problems, as it significantly outperforms existing methods (including the aforementioned expensive ones!) and comes at a much lower run time cost than the expensive methods (for run time see Appendix I).
**Related work:** There is a lot of important related work in this area and we were careful to cite all three of the papers mentioned. Our method is similar to many existing methods in that they all do representation learning from a dynamics-modelling based objective. We cite at least a dozen papers which also use this same idea. This idea appears often in the literature because it is a fundamental concept relating to bisimulation metrics. The important question is how to best implement this idea given the complexity of deep RL, where there is function approximation, continuous states and actions, finite data, etc. We believe our work adds to this important story on the empirical side.
Specifically, here’s how our method differs from the mentioned approaches:
- **Zhang et al.** The title listed is presumably a copy error. Our best guess is “Learning Invariant Representations for Reinforcement Learning without Reconstruction” by Zhang et al., but let us know if otherwise. The primary goal of this paper is abstraction in the state space for visual tasks. In SALE we are interested in feature learning in a low-level state space, rather than abstraction in a complex visual space. Instead of trying to summarize the input, we are trying to expand the input with useful features. We also learn a representation over state and action, rather than just the state. This distinction is important: in Section 5.1 we find that doing so introduces new challenges which need to be addressed.
- **Gelada et al.** is closely related. A few obvious differences: (setting) they work in the image space with discrete actions, (conceptual) they only learn a representation over the state, (design) they include the reward in the target, they learn the representation end-to-end as an auxiliary loss, they don’t use a target network for the encoder. In section 4.2 in our paper, we find that the nuances in design choices can have a significant impact on performance. Similar to Zhang et al, they work in a visual space and learn state-only features.
- **Chandak et al.** is similar in that they consider actions in their representation. However, while we consider both state and action, they don’t consider the state at all. Furthermore, their approach to representation learning is very different from ours, based on a policy gradient method, rather than learned from a disentangled signal.
**Normalization:** With AvgL1Norm it is impossible for the embedding to be 0. Even if every entry of the embedding shrinks to some very small $\epsilon$, the normalized value is $\frac{\epsilon}{\frac{1}{N} \sum_{i=1}^{N} |\epsilon|} \approx 1$. Prior work does something similar by normalizing only in the loss function. It turns out that normalizing the embedding itself is important experimentally, since we use the embeddings concatenated with the original state-action input.
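To make this concrete, here is a minimal NumPy sketch of the normalization as we understand it (the function name and the stability epsilon are our own, not the exact TD7 implementation):

```python
import numpy as np

def avg_l1_norm(z, eps=1e-8):
    # Divide the embedding by the mean absolute value of its entries.
    # Even if every entry shrinks toward 0, the ratio stays near 1, so
    # the normalized embedding cannot collapse to the zero vector.
    return z / (np.abs(z).mean(axis=-1, keepdims=True) + eps)

z = np.full(8, 1e-6)       # a nearly collapsed embedding
out = avg_l1_norm(z)       # every entry is rescaled to roughly 1
```

The same rescaling applies per-row when `z` is a batch, since the mean is taken over the last axis only.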
**Checkpoints:** We are not sure which figure you are referring to (Figure 7 is presumably a typo). Regardless, the value of checkpointing lies more in stability than in raw performance, since instability can get averaged out. In the general response (and the attached PDF), we show how checkpoints improve the stability of RL algorithms (Figure 4).
There is indeed some bias introduced by the exploration noise used during checkpoints. However, this isn’t necessarily problematic since the checkpoint is more likely to prefer policies which are robust to noise in the action space. One issue that could arise is if the random exploration noise results in some kind of positive benefit. To combat this, we actually use the minimum performance over N episodes. This choice is discussed and evaluated further in Appendix F.
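The minimum-over-episodes rule described above can be sketched as follows (a hypothetical illustration; the names are ours and the actual implementation may differ):

```python
def should_update_checkpoint(episode_returns, best_score):
    # Score a candidate checkpoint policy by its worst return over the
    # N assessment episodes, so a single lucky episode (e.g. one helped
    # by exploration noise) cannot inflate the score.
    score = min(episode_returns)
    return score > best_score, score

# The candidate replaces the incumbent only if even its worst episode
# beats the incumbent's score.
ok, score = should_update_checkpoint([310.0, 295.0, 402.0], best_score=290.0)
```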
---
Rebuttal Comment 1.1:
Comment: Thanks for the added experiments, it does strengthen the case for the paper.
I am still lingering on the effect of the proposed normalization step. Can you clearly define the optimization problem being solved here, and demonstrate how the normalization step is hedging against the network collapsing to trivial solutions?
"However, this isn’t necessarily problematic since the checkpoint is more likely to prefer policies which are robust to noise in the action space."
I do understand that this empirically would not be a terrible idea, however, I meant that to estimate the utility of a checkpoint policy in an unbiased way, one needs to be doing off-policy policy evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you for the quick response!
**Embeddings**: In the context of training the embeddings, what we do is very similar to BYOL [1] and SPR [2], who only apply normalization in the loss function. The main difference between normalizing the embedding and the loss function is that the final embedding used by a downstream network (the value function in our case) will be normalized. During training, the process is effectively identical.
BYOL has a discussion on collapse in Section 3.2 of their paper which is relevant to our approach. The main intuition is that while there do exist trivial uninformative solutions, there is no reason for the method to actually converge to them. This is because we are optimizing a loss towards a target network which changes independently of the loss function.
Formally studying this optimization process is an important problem in both RL and self-supervised learning, but definitely out of the scope of this project. Empirically we haven’t found any issues with representation collapse, and this observation is consistent with work based on BYOL. We also experiment with some settings that are closer to what SPR/BYOL propose in the design choices section of our paper (Section 4.2) and find our approach is better for our use case.
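The slowly moving target mentioned above can be sketched with a BYOL-style Polyak/EMA update (illustrative only; TD7 itself may use a different target-update schedule, and the coefficient is an assumption):

```python
def ema_update(target_params, online_params, tau=0.005):
    # Move each target-network parameter a small step toward the online
    # network. The target therefore changes slowly and independently of
    # the current loss, which is the BYOL-style intuition for why the
    # embeddings do not collapse to a trivial solution in practice.
    return [(1.0 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]

target = ema_update([0.0], [1.0], tau=0.5)   # -> [0.5]
```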
**Checkpoints**: Yes, you are certainly correct that to get an unbiased estimate of the checkpoint we need to do a form of unbiased off-policy evaluation. What our method does is use a proxy measure of the quality of the checkpoint policy (the online exploration policy performance).
There are a lot of potential alternate approaches that could be used (for example, delving into OPE literature). Even naively there are simple adjustments that could be made to minimize bias, such as running a potential checkpoint for X episodes with no exploration noise if it surpasses a threshold or the previous exploration policy.
However, in practice we found that our proxy was accurate and does not come at any additional cost in terms of algorithmic complexity, run time, or sample-efficiency. Regardless, we think further studying the use of checkpoints in RL is an exciting research direction, and this is just an initial outline of a potentially very valuable strategy for RL.
**References:**
- [1] Grill, Jean-Bastien, et al. "Bootstrap your own latent-a new approach to self-supervised learning." Advances in neural information processing systems. 2020.
- [2] Schwarzer, Max, et al. "Data-Efficient Reinforcement Learning with Self-Predictive Representations." International Conference on Learning Representations. 2020. | null | null | null | null | null | null |
Learning Curves for Noisy Heterogeneous Feature-Subsampled Ridge Ensembles | Accept (poster) | Summary: The authors provide a theoretical analysis of ensemble learning with linear ridge regression in the case where heterogeneous feature subsampling is used. The authors make a number of simplifying assumptions about the distribution of the data (Gaussian distribution and noise, linear function), and using the replica trick from statistical physics derive the generalization error for the L2-regularized least-squares solution. In simulations, the authors demonstrate that the derived solution coincides with the numerically calculated generalization error, and based on analyzing learning curves provide a number of insights about the behavior of ensembled ridge regression, suggesting a novel way to mitigate the double descent phenomenon through the use of heterogeneous feature subsampling in ensembling.
Strengths: The work provides novel insights about how double-descent behaves based on noise, regularization, and subsampling and provides alternative strategies to regularization for avoiding the peak. These are quite central questions in modern machine learning theory and new ideas here can prove to be significant. The mathematical analysis seems rigorous and the replica trick used for the analysis appears to be a new type of tool for studying this phenomenon, though I do not have suitable mathematical background to follow all the proofs in detail. The basic setting and assumptions are clearly defined, related work seems to be covered well, and novelty of the contribution compared to related similar works is clearly established.
Weaknesses: The work makes many simplifying assumptions that are unlikely to match most real-world learning problems (e.g. Gaussian data, linear teacher function), and the analysis may not be tractable in practice beyond simpler cases such as the globally correlated data analyzed in Section 2.3. It is not so clear whether the results would offer practical tools for an analyst wishing to apply heterogeneous feature-subsampled ridge ensembles in an optimal way, though results in the supplementary materials on the CIFAR10 classification task suggest the results can also have practical relevance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How computationally demanding is the computation of the generalization error curves?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *Weaknesses:
The work makes many simplifying assumptions that are unlikely to match most real-world learning problems (e.g. Gaussian data, linear teacher function), and the analysis may not be tractable in practice beyond simpler cases such as the globally correlated data analyzed in Section 2.3.
It is not so clear whether the results would offer practical tools for an analyst wishing to apply heterogeneous feature-subsampled ridge ensembles in an optimal way, though results in the supplementary materials on the CIFAR10 classification task suggest the results can also have practical relevance.*
We thank the reviewer for raising this point. To address this weakness, we will add the following sentences to the discussion section:
“A large line of work has shown that Gaussian data approximations accurately predict the learning curves in many real world problems, particularly when using a least-squares loss function (see, for example, Pesce [2023], Canatar [2021], Gerace [2022], Hu [2022]).”
Referring to the following papers:
Pesce [2023]: https://doi.org/10.48550/arXiv.2302.08923
Canatar [2021]: https://www.nature.com/articles/s41467-021-23103-1
Gerace [2022]: https://arxiv.org/abs/2205.13303
Hu [2022]: https://arxiv.org/abs/2009.07669
We have also performed our own realistic simulations which are presented in the supplemental material, demonstrating that our main insights carry over to the CIFAR10 classification task. We will add the following sentences to section 2.5 of the main text to highlight these findings:
“In figure S1, we train linear models to predict the labels of images from the CIFAR10 dataset from subsamples of the features in the top hidden layer of a pre-trained deep neural network. We find that heterogeneous ensembling (over linear models with varying input dimensionality) prevents catastrophic over-fitting, yielding monotonic learning curves without the need for regularization. (see SI for details)”
*Questions:
How computationally demanding is the computation of the generalization error curves?*
All of the computations necessary for this paper can be performed in a matter of hours on a single GPU. Computing the theoretical generalization error curves can be expensive (taking many minutes) in the general case where the result is given as a contraction of matrices, and where solving the fixed point equations for the order parameters requires inverting a large matrix at each step. However, in the special case of globally correlated (equicorrelated) features, we obtain accurate, fully analytical expressions which can be plotted instantly. If accepted, we will add a note about the computational complexity of the theoretical and empirical calculations of the error curves to the paper.
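To illustrate the kind of computation involved, here is a generic damped fixed-point iteration (our own sketch; `F` stands in for a hypothetical order-parameter self-consistency map, and in the general case each evaluation of `F` would involve the large matrix inversion mentioned above, which dominates the cost):

```python
import numpy as np

def solve_fixed_point(F, x0, damping=0.5, tol=1e-10, max_iter=100_000):
    # Damped iteration x <- (1 - d) * x + d * F(x), stopping when the
    # update is below tol. Each call to F may be expensive (e.g. a
    # large matrix inversion), so the per-iteration cost dominates.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1.0 - damping) * x + damping * F(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy example: converge to the unique fixed point of cos(x) (~0.739085).
root = solve_fixed_point(np.cos, [1.0])
```

By contrast, the fully analytical equicorrelated expressions avoid this loop entirely, which is why those curves plot instantly.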
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications provided. | Summary: The authors provide theoretical results on generalization for ridge regression for the case of an ensemble of regression models trained on feature subsets, with noise, and correlation.
They relate this to the previously observed double-descent phenomena and characterize how the fraction of subsampled features, the regularization strength, and other properties affect the double descent phenomenon and where it occurs (i.e., how these impact generalization error).
Finding that double descent occurs at different places for different feature subset amounts, they also propose and analyze using a heterogenous ensemble of different feature subsampling sizes as a way to mitigate double descent without having to tune regularization or model architectures.
****Update****
Revised my score after the responses - see my response comment for rationale.
Strengths: The generalization bounds for the particular setting seem novel.
The problem is a highly relevant one, and the idea of using feature-subset ensembles is interesting and potentially useful. As alluded to in the conclusion, searching over many architecture and regularization settings for large neural nets may be prohibitive, so this kind of approach could provide an alternative for such cases with a fixed architecture, by creating a feature-subsample ensemble instead. However, this possibility is only alluded to and not tested, as only theoretical results for simple ridge regression problems are analyzed.
The authors provide thorough analyses of the results of the theorem for generalization error, to try to elucidate what different aspects of the ridge regression problem and choices for the modeling mean in terms of generalization error.
Weaknesses: 1) The paper is hard to follow, and it is difficult to extract useful meaning or conclusions from it.
a) Some terms are not defined or given adequate explanation. As a particular example, it would be helpful to explain what "readout noise" and "readout dimensionality" mean in this paper when first introduced - this was not clear to me, and I found their use confusing in the theoretical setup section and even the related work. They are just mentioned without defining what they are or how they relate to the model, and it would help to give an example of what they would correspond to.
b) A lot of different parameters are introduced, and they are hard to remember when trying to connect them to the various result figures and analyses - it may help to remind the reader, and also to include simple example models describing what the different parameters and values correspond to.
c) I feel the main results and conclusion need a simpler explanation and deciphering for the reader, as the various plots and analyses are based on a large number of symbols / parameters that are hard to remember, and the reader has to dig through the text each time to recall what they are. A simplified example and conclusion, as well as a table, may help. It's also hard to come away with any kind of recommendation or firm conclusion about what we could expect to work better in practice.
2) It's not clear how useful and impactful the results from the paper and the derived theorems are.
It seems the complicated expression derived is too complex to yield any useful properties, and it requires knowing ground-truth quantities - so the benefit is not clear.
Even in the simplified tractable case, it's not clear what interpretation comes out of it.
In general, it's not clear whether any novel and useful findings have come out of this.
I think a key part of the issue might be that, after presenting each set of results / figures, some interpretations, conclusions, and take-aways for the reader are needed and are currently lacking - results are just presented with little explanation or analysis.
3) Related to the previous points, I feel that having real-data and model experiments and results could really strengthen this work, and also more fully explore the proposed idea and benefit of using heterogeneous feature-subsampling ensembles.
4) Novelty is also somewhat unclear.
Obviously, heterogeneous feature-subsampling ensembles have been studied and applied in the past, and for some specific models there are theoretical results on generalization error as well, such as for tree ensembles or even linear model ensembles, with different kinds of approaches such as random subspace methods.
Additionally, the authors point to some specific related work on the theoretical study of generalization error. However, clearly spelling out the differences in this work seems lacking. For example, in the related work they mention other papers also studying feature subsampling for ensembles and finding that it provides implicit regularization, in particular [20]. However, it's not stated what differentiates this work from that prior work - what novel aspect does this provide? It's necessary to spell out what limitations of previous work this addresses and overcomes.
I.e., overall it would help to clearly spell out how their work is different and novel with respect to prior work on feature-subsampling ensembles and what new conclusions and approaches result from it.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors:
"...correlated data by ensembles with heterogeneous readout dimensionality."
This is not very clear - what is meant by "readout dimensionality"? I'm assuming it means heterogeneous input data dimensionality - so why not just say that? "Readout dimensionality" is a term I've almost never heard, and it sounds like it means the dimension of the model output, which doesn't make sense in this case.
It would be helpful to explain what "readout noise" means in this paper when first introduced - this was not clear to me, and I found its use confusing in the theoretical setup section. It's just mentioned without defining what it is or how it relates to the model, and it would help to give an example of what it would correspond to. After several re-reads I concluded it probably corresponds to noisy model outputs during training, as when using something like dropout. However, in most cases in practice for regression, even if noise is used in training, deterministic outputs are used when applying the models (no longer any "readout noise").
2.6 is hard to follow - again, what is meant by each type of "readout"? What is a "single fully-connected readout", and what is meant by "multiple sparsely connected readouts"? I'm assuming the latter is the feature-subsampling ensemble and the former a single model with no feature subsampling, but this is described in a confusing way, so it's not clear and took some thought to decipher.
What is meant by correlation code and correlation strength? How are these defined, how are they used to generate the models, and how are they used in these analyses and results?
Also how do these conclusions differ from past studies of ensembling approaches?
Why in all the examples is P <= M? It would be nice to explain this as well, and also what happens when P > M. I.e., after some thought I assume this is to focus on the over-parameterized regime, but this is never really stated and explained, and there are still many practical cases where P >> M.
What impact does k have on generalization error?
Also, I am wondering how the results compare to the non-ensemble case, i.e., a single model with no feature subset sampling. It seems this shifts the problem to tuning the subset sample fraction sizes.
Also, it's not clear to me from the results and figures whether tuning lambda (regularization) can always result in better performance than using the heterogeneous ensemble.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes - doesn't seem needed in this case.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for writing a thorough review of our submission. Please see the response to all reviewers for a discussion of the meaning of “readout noise”, a proposed table clarifying the meaning of the parameters referenced in proposition 2 and figures 2, 3, and 4, discussion of a comparison between ensembling and L2 regularization, and a discussion of applications to real data.
Thank you also for pointing out that “readout dimensionality” is confusing terminology. This confusion may be due to the fact that in linear regression the number of input features is equal to the number of model parameters, which we referred to as the “readout dimensionality.” If accepted, we will remove this term from the paper, and instead refer to the “input data dimensionality” as suggested. We clarify that the model output is one-dimensional for the setup described in propositions 1 and 2 and all figures of the main text. Our qualitative finding that heterogeneous input data dimensionality can mitigate double-descent carries over to simulations of the CIFAR10 classification task where the output dimensionality is 10, as shown in the SI.
On terminology – You have assumed correctly that a “single fully-connected readout” is a single model with no feature-subsampling, and “multiple sparsely connected readouts” is the feature subsampling ensemble. We will update this sentence to the following: “We now ask whether ensembling is a fruitful strategy – i.e. whether a feature subsampling ensemble outperforms a single model without feature subsampling.”
Thank you for prompting us to clarify the novelty of our work. We introduce and study a new form of heterogeneous subsampling, and demonstrate its benefits in an analytically tractable setting. Specifically, we consider ensembling over models with heterogeneity in the number of features connected by each ensemble member. All prior research that we are aware of, including [20], considers only the setting in which all ensemble members have the same size. We emphasize this distinction in the “Related works” section, writing: “However, [18] and [19] focus their analysis on the case of isotropic data and Gaussian random masks of homogeneous dimensionality. In contrast, we explicitly consider learning from correlated data by ensembles with heterogeneous [input data dimensionality].” More work is required to establish and study the practical benefits of this form of heterogeneity. If accepted, we will acknowledge this more clearly in the final version of the paper. This is also the first work we are aware of which presents a detailed study of the interplay between ensembling and readout noise, or to produce phase diagrams of the type shown in figure 4.
Thank you for pointing out that Section 2.6 is hard to follow. This will be re-written to reflect an updated version of the special case in proposition 2. The parameters of the phase diagrams in figure 4 will be re-interpreted in terms of an effective signal-to-noise ratio and data-task alignment. Please see the response to all reviewers for an overview of these developments and the attached updated version of figure 4.
*Also how do these conclusions differ from past studies of ensembling approaches?*
Past studies of ensembling approaches have typically aimed to establish formal equivalences between optimal ensembling and optimal regularization. The optimal subsampling fraction, though, depends on the size (number of samples) of the training set. In this work, we present heterogeneous ensembling as a strategy to mitigate double-descent in settings where the size of the training set is unknown. We are the first study to explicitly analyze the behavior of ensembles with heterogeneous input data dimensionality and to recognize the benefits of this type of heterogeneity to mitigate double-descent.
To clarify why P<=M in all the figures, we will add the following sentence to the discussion:
“As ensemble members are trained independently, the learning curve cannot be divided into an under-parameterized and over-parameterized regime. Rather, at any sample size $P$, each ensemble member with $N_r<P$ is in its over-parameterized regime and each ensemble member with $N_r>P$ is in its under-parameterized regime.”
Because of this, the plots shown cover both the under-parameterized and over-parameterized regimes of each member of the ensemble (since $N_r \leq M$ for each ensemble member)
To clarify the impact of k on generalization error, we will add the following sentences to the discussion of figure 4 at the end of section 2.6:
“These plots show the effect of the ensemble size $k$ on the generalization error curves for ‘homogeneous ensembles’, where each ensemble member sees the same number of features, and in the absence of regularization. We find that increasing $k$ may either help or hurt generalization performance, depending on the parameters of the task, and the size of the training set. The shape of the learning curve changes with $k$ to shift the double-descent peak to the value $\alpha = \frac{1}{k}$.”
However, readers may still wonder how the value of $k$ affects generalization at finite ridge, and in heterogeneous ensembles. If accepted, we will clarify this by adding panels to figure 3 which plot learning curves for ensembled linear regression at finite ridge $\lambda$ for ensembles of a handful of $k$ values, as we do for the CIFAR10 classification problem in figure S1.
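The claim that the double-descent peak of an unregularized homogeneous ensemble sits near $\alpha = 1/k$ can also be checked with a small Monte Carlo simulation (our own illustrative setup with isotropic Gaussian data and a noiseless linear teacher, not the paper's replica computation):

```python
import numpy as np

def ensemble_test_mse(P, M=64, k=2, n_test=1000, n_trials=40, seed=0):
    # Ridgeless ensemble: k members, each fit by least squares on a
    # disjoint block of M/k features of a noiseless linear teacher.
    # The features a member does not see act as effective label noise,
    # so its error diverges near its interpolation threshold P = M/k.
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        w = rng.standard_normal(M) / np.sqrt(M)   # linear teacher weights
        X = rng.standard_normal((P, M))           # training inputs
        Xt = rng.standard_normal((n_test, M))     # test inputs
        y, yt = X @ w, Xt @ w
        preds = np.zeros(n_test)
        for idx in np.array_split(np.arange(M), k):
            w_hat, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
            preds += Xt[:, idx] @ w_hat
        errs.append(np.mean((yt - preds / k) ** 2))
    return float(np.mean(errs))
```

With $M = 64$ and $k = 2$, the measured test error spikes near $P = M/k = 32$ (i.e. $\alpha = 1/2$) and is far smaller at $P = 8$ or $P = 60$, consistent with each ensemble member diverging at its own interpolation threshold.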
To the question:
*Also wondering how the results compare to the non-ensemble case? I.e., a single model with no feature subset sampling. I.e., this shifts the problem to tuning the subset sample fraction sizes*
Calculating the optimal subset sample fraction sizes is precisely what we do in the phase diagrams in figure 4. Please see the response to all reviewers for an updated version of figure 4 and subsequent analysis.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses - I think the proposed changes are very helpful. The table of key parameters / terms is very useful to quickly look things up.
On further thought - I feel one other possible thing that also may help a bit is using the commonly used symbols in ML that people are familiar with - as they may more easily have an idea of what is meant. For instance - sample size is most commonly represented with "N" or "n" (and more rarely "m"), but here "P" is used. Similarly feature dimension is most commonly "p" (though sometimes "m" as well as is used here). But all these symbols are used to mean different things here.
Also, it would be best to mention the additional results from the supplementary materials in the main paper, even if they can't fit there - in particular, mentioning that experiments were also performed on CIFAR10 and briefly summarizing the results. I think this set of experiments is very helpful in illustrating the point and validating the idea on real data. It would still be nice to have results on additional datasets, to show how this behavior holds more generally.
Overall I am more inclined to lean toward the accept side after the responses and other discussions. I think it is an interesting work that is useful for researchers to see. I still feel the presentation could be improved, and hopefully the promised changes will address this. I.e., it just reads very busy and is hard to follow and parse in its current form - even some of the experiment results, which are nice, I expect could be quite difficult for readers to parse. Part of this is due to the large number of parameters / variables included, but I can understand it's difficult to get around these in this kind of theoretical work. | Summary: This paper introduces a theoretical investigation into ensembling methods applied to linear ridge regression with feature subsampling. It builds upon previous research in this area by extending the analysis to include scenarios with varying readout dimensionality.
By employing the replica method, the authors derive expressions for the average-case generalization error. These expressions capture the influence of both the data structure and the hyperparameters of the ensembling method employed.
Furthermore, the paper focuses on a simplified scenario to examine how the degree of subsampling and heterogeneity affect the occurrence of the double descent phenomena in learning curves for generalization error. This analysis provides potential insights into the understanding of this well-known and recurring phenomenon in machine learning.
Strengths: - The paper is clearly written, and the experiments presented are of high quality.
Weaknesses: - The authors use as a proof technique for the main theorem the replica method, a heuristic method originating in statistical physics. While this method has been proven rigorous in certain contexts, its general correctness has not yet been formally established. Unfortunately, the paper fails to acknowledge this limitation, which undermines the overall credibility of the research findings.
- In terms of technical results, the computation leading to proposition 1 appears to be a minor modification of the calculation presented in a previous work by Loureiro [2022]. However, unlike the present study, Loureiro's work accompanies the calculation with a rigorous proof. This discrepancy raises concerns about the rigour and robustness of the current paper's technical contributions.
(Minor) Typos:
- Line 4: the the
- Line 53: studied studied
- Line 212: eta
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Would the authors consider revising the presentation of their results from Proposition 1 to reflect the heuristic nature of the findings? This would involve explicitly stating that the results are derived using a heuristic method, thereby providing clarity to readers about the level of mathematical rigour employed in the study. Such a revision would enhance transparency and enable readers to appropriately interpret the results.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: While the authors do acknowledge certain limitations in their work and propose potential new directions for applying the techniques employed, there are a couple of crucial aspects that remain unaddressed. Firstly, the paper fails to explicitly mention that the replica method utilized lacks mathematical rigour, which is an important consideration for interpreting the validity and reliability of the results. Additionally, the authors do not provide any indications or suggestions on how to potentially establish a more rigorous foundation for their findings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for pointing out the typos, which we will correct, and thank you for your comments and suggestions. The weaknesses identified in this review revolve mainly around the fact that we have derived our results using the replica method from statistical physics. In the remarks made directly after the statement of our main result in the main text, we plainly state that the replica trick is a “non-rigorous but standard” approach. We make no claim that our results are fully rigorous, but because the results obtained from the replica method have been demonstrated to coincide with rigorous results for many similar problems, and our results show excellent agreement with numerical experiments, we have good reason to expect that our result is correct. However, in light of the concerns raised here we see fit to change the statement of our main result to further emphasize that the replica trick is not rigorous, and to establish a rigorous basis for our main result.
In the case where the data covariance matrix has a bounded spectrum, we believe that our results may be obtained through a clever special case of the rigorous result of Loureiro [2022]. While their derivation does not explicitly consider ensembles with variation in the number of features viewed by each ensemble member, this type of heterogeneity may be added post-hoc by choosing data covariance matrices which “zero out” a number of neurons which varies over the ensemble. If accepted, we will add a supplemental section discussing this correspondence and including a detailed derivation of our result from the general result of Loureiro [2022]. We will also update the “proof” of the main theorem to the following:
“We calculate the terms in the generalization error using the replica method, a standard but non-rigorous method from the statistical physics of disordered systems. The full derivation may be found in the SI. In the special case where the covariance matrices $\Sigma_s, \Sigma_0$ have bounded spectrum, this result may be obtained as a clever special case of the results of Loureiro [2022] (see SI for derivation).”
We will also add the following remark after the proof of proposition 2:
“Note that, as in this case $\Sigma_s$ does not have a bounded spectrum, this result does not follow from the rigorous results of Loureiro [2022]. However, we find excellent agreement between theory and experiment when data dimension is sufficiently large.”
We emphasize that, even as our general result may be recovered as a special case of the results of Loureiro [2022], simplifying this general result in the special case of subsampling from globally correlated features is a very tedious calculation which requires significant work. Our investigation of this general result also differs entirely from Loureiro [2022], which considered only ensembles with the same number of readout weights across ensemble members, and included no study of feature noise or readout noise.
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to address my criticisms.
I have raised my grade to "5: Borderline accept"
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our rebuttal and modify your review. | Summary: This article characterizes the asymptotic performance curve of a heterogeneous feature ensembling framework for ridge linear regression, in the limit of comparably large numbers of data samples and variables. Because predictors built from heterogeneous feature sets have error peaks at different data sample sizes in the double-descent performance curve, they give rise to an ensemble that is robust to the number of data samples, without a carefully tuned regularization parameter.
Strengths: * Clarity of the mathematical setup.
* Numerical validation of asymptotic results on finite data sets.
* Discussion of the theoretical implications of the analysis.
Weaknesses: * The analysis applies only to the square loss while recent related work addressed already a large family of convex losses.
* Even though the interest of heterogeneous feature ensembling is explained as mitigating the double-descent phenomenon, this problem can in fact be settled by optimizing the regularization parameter on a cross-validation set.
* There is no experiment on real data sets to see how well the observed consequences such as the mitigated double-descent curve apply to real data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I wonder if the optimal learning performance is achieved at a uniform subsampling rate (i.e., homogeneous ensembles); and whether the feature ensembling can indeed improve the performance without readout noise, as is the case with readout noise (and under certain conditions on the data model parameters) according to the discussion of Section 2.6.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are discussed. There does not seem to be any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for writing a helpful review. Your questions about cross-validated regularization and real-world experiments are addressed in the global rebuttal and attached PDF. | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews. Please find below a description of updates to the paper and responses to comments which were raised in multiple reviews.
-We have changed the definition of the “readout noise” so that it is present both during training and evaluation of the model. This leads to a minor change in the form of the generalization error. Now, the contribution due to the readout noise goes as $\langle E_{rr’} \rangle \sim \frac{\delta_{rr’} \eta_r^2}{1-\gamma_{rr’}}$
-We have changed “globally correlated” to “equicorrelated” to agree with previous literature.
-We have updated proposition 2 to be more general, including an overall scale of the data, isotropic feature noise, and allowing for overlap in the features sampled by different readouts in the ensemble. In the updated statement, we simplify the general expression for the generalization error from proposition 1 under the following special case:
>$$ \mathbf{w}^* = \sqrt{1-\rho^2}\, \mathbb{P}_{\perp} \mathbf{w}^*_0 + \rho\, \mathbf{1}_M $$
>$$ \mathbf{w}^*_0 \sim \mathcal{N}(0, \mathbf{I}_M) $$
>$$ \mathbf{\Sigma}_s = s \left[(1-c)\, \mathbf{I}_M + c\, \mathbf{1}_M \mathbf{1}_M^\top \right] $$
>$$ \mathbf{\Sigma}_0 = \omega\, \mathbf{I}_M $$
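For concreteness, this special case can be instantiated in a few lines of NumPy; the parameter values below (`M`, `s`, `c`, `rho`, `omega`) are hypothetical choices for illustration only, not values used in the paper:

```python
import numpy as np

# Hypothetical parameter values, chosen only for illustration.
M, s, c, rho, omega = 500, 1.0, 0.3, 0.5, 0.1
rng = np.random.default_rng(0)

ones = np.ones(M)
# Equicorrelated data covariance: Sigma_s = s * [(1-c) I + c * 1 1^T]
Sigma_s = s * ((1 - c) * np.eye(M) + c * np.outer(ones, ones))
# Isotropic feature-noise covariance: Sigma_0 = omega * I
Sigma_0 = omega * np.eye(M)

# Teacher weights: a rho-weighted component along 1_M plus an isotropic
# component in the orthogonal complement (P_perp projects out 1_M).
P_perp = np.eye(M) - np.outer(ones, ones) / M
w0 = rng.standard_normal(M)
w_star = np.sqrt(1 - rho**2) * P_perp @ w0 + rho * ones
```

Note that diagonal entries of `Sigma_s` equal $s$ and off-diagonal entries equal $sc$, so $c$ directly controls the degree of equicorrelation.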
-To enhance readability, we will use the extra space to include a table of parameters relevant to proposition 2 and figures 2, 3, and 4. This table is included in the PDF attachment.
-Figure 4 has been updated (see attached PDF) and the surrounding discussion will be updated as follows:
>In the ridgeless limit, we can then express the error as $E_g(k) = s(1-c)F(H, k, \rho, \alpha)$, where $H \equiv \frac{\eta^2}{s(1-c)}$ is an effective inverse signal-to-noise ratio and $F(H, k, \rho, \alpha)$ is a rational function of its arguments (see SI for full expressions). Thus the value $k^*$ which minimizes error depends on $\eta$, $s$, and $c$ only through the ratio $H$.
>Using our analytical theory, we plot the optimal number of readouts $k$ in the parameter space of $H$ and $\rho$ (see Fig. 4a). The resulting phase diagrams are naturally divided into three regions. In the signal-dominated phase a single fully-connected readout is optimal ($k^* = 1$). In an intermediate phase, $1 < k^* < \infty$ minimizes error. And in a noise-dominated phase $k^* = \infty$... As is evident in these phase diagrams, an increase in $H$ (a decrease in SNR or an increase in $c$) or an increase in $\rho$ causes an increase in $k^*$.
-We will clarify our motivation for considering a “readout noise” by adding the following paragraph to the introduction:
>“Subsampling from different sets of features introduces variance between members of an ensemble. However, other sources of variance may also distinguish members of an ensemble of predictors. For example, random initialization or weight dropout during training may introduce variance in an ensemble of deep networks trained with gradient descent. Physical neural networks may also have intrinsically noisy neurons which introduce variance between ensemble members. To capture these effects which are not naturally present in the regression setting, we introduce an explicit “readout noise” which is drawn independently for each ensemble member. We provide a detailed analysis of the interplay between feature subsampling ensembles and readout noise in the special case of equicorrelated features.”
We will add the following remark after the statement of our main result:
“Remark 4: “Readout noise” is noise added to the prediction of each member of the ensemble of readouts before those predictions are averaged. The readout noise parameter $\eta$ represents all additional sources of variance across predictors not accounted for in our simple model. For example, in an ensemble of deep neural networks trained with gradient descent, there will be additional variance across predictors due to random initialization of the networks (Atanasov 2023). Additional variance may also be present in physical neural networks, such as analog neural networks or biological neural circuits, due to inherent stochasticity in the physical mechanisms of computation.”
And by adding the following sentence to the discussion of Figure 4 in section 2.6:
“The resulting phase diagrams demonstrate that partitioning data features amongst multiple readouts can be beneficial when there are sources of variance between ensemble members beyond that induced by sampling from separate sets of features.”
-On the need for real data and model experiments – we have included in the supplementary material an application of heterogeneous ensembles to the CIFAR10 classification task. While the classification setting does not correspond to the regression setting of our main theorem, the qualitative behavior carries over to this real-world task – we observe that heterogeneity in the number of features seen by each member of the readout ensemble mitigates double-descent even without regularization. We plan to significantly expand this portion of the paper to demonstrate the benefits of heterogeneous ensembling for image classification with pre-trained feature maps (for example, a pre-trained ResNet).
-A single fully connected readout with optimal L2 regularization does perform better than a feature subsampling ensemble. However, training a feature-subsampling ensemble is much faster than training a single fully-connected readout layer. (Assuming cubic scaling of the matrix inverse operation, training $k$ models of size $M/k$ scales as $1/k^2$). Further, introducing heterogeneity of the type we study into the ensemble gives the added benefit of avoiding catastrophic over-fitting without a task-tuned regularization. This approach is most likely to find applications in settings where training time or computational resources are limited. If accepted, we will add these points to our discussion. We will also add curves to panels c.i-vi which show the learning curves for a single model with optimal regularization.
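The $1/k^2$ scaling claimed above follows from simple counting; a quick sketch (with an arbitrary illustrative feature count `M`, and assuming the $O(d^3)$ matrix inverse dominates ridge training cost):

```python
# Back-of-the-envelope comparison, assuming training cost is dominated by the
# O(d^3) inverse of the d x d regularized Gram matrix; M is an arbitrary size.
M = 1200

def training_cost(k):
    """Cost of training k ridge readouts, each on M/k features: k*(M/k)^3 = M^3/k^2."""
    return k * (M / k) ** 3

full = training_cost(1)  # one fully-connected readout
ratios = [training_cost(k) / full for k in (1, 2, 4, 8)]
# ratios follow the 1/k^2 scaling: 1, 1/4, 1/16, 1/64
```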
Pdf: /pdf/1e73ae5d6d030ad5ac0aacc276da546800755cb2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper provides an asymptotic analysis of ensembles of ridge regressors using varying numbers of subsampled features. The authors consider a Gaussian data model with feature noise and readout noise in addition to label noise. Using the replica method from statistical physics, they obtain precise asymptotics of the limiting ensemble risk for any collection of subsampling operators. They specialize this result in the case of globally correlated isotropic data and numerically plot double descent curves for various non-overlapping ensemble strategies. They find that in poorly regularized regimes, increased heterogeneity of feature subset sizes leads to a significant decrease of worst-case generalization risk, in other words dampening the double descent peak, albeit typically at the expense of increased risk in other data aspect ratios. Departing from heterogeneity to consider instead homogeneous non-overlapping ensembles, the authors further demonstrate that in the presence of readout noise, there are regimes in the phase space of readout noise level and global feature correlation in which it is optimal to use an ensemble rather than a single learner, although without readout noise it is never optimal to ensemble.
Strengths: The biggest strength of this paper in comparison to existing work on linear ensembles of random projections of features is that this analysis applies to arbitrary projection operators that need not be independent. In fact, for the specific cases the authors consider of non-overlapping feature subsets, which would correspond closely with distributed optimization practice (I recommend the authors emphasize this in their introduction), the projection operators are clearly very much not independent. Furthermore, the most general form of their results in Proposition 1 also makes no assumption on the relationship between the projection operators and the covariances, in contrast with most existing work which assumes them to also be independent.
I am also not aware of any other work which explicitly considers heterogeneous linear ensembles, which is a novel contribution.
Weaknesses: I think that the biggest weakness of this paper is the justification of the data model and of the advantage of using a (heterogenous) ensemble. I think that these could be improved by adding justifying and motivating comments throughout the paper, and if the authors would indicate how they would make such changes for the camera ready version, I would increase my score of the paper.
Data model:
- The features are corrupted with additional noise that is independent of the label, which is an interesting but highly non-standard assumption. Even more non-standard is the assumption that this noise is drawn independently in every forward pass. This assumption really should be motivated---should the reader be imagining this as something like noise due to dropout in a neural network?
- The model outputs are corrupted with readout noise independent across each member of the ensemble, again drawn independently each forward pass. This seems like an even stranger assumption to me; I could imagine some negligible computational noise, but I struggle to come up with a setting in which we would see the noise levels that are modeled in this paper. It is very important to motivate this, as the experiment in Figure 4 only justifies ensembles in settings where readout noise is high. Without good motivation, it seems like a strawman.
Ensemble advantages:
- This is a complaint I have for many other papers that deal with double descent, but applies equally to this paper. The "correct" solution to double descent is to properly regularize, and other strategies such as the ensembling proposed here can be seen as "hacks" that can mitigate some of the worst double descent effects but are typically suboptimal. We see this in this case as well. E.g., consider Figure 3, where the heterogenous ensembles only improve risk in the under-regularized setting. In the setting with more appropriate regularization (c.iv-vi), where the risk is lower than in (c.i-iii) for most values of $\alpha$, the heterogeneous ensemble is strictly worse than the homogeneous ensemble, which I am sure itself is strictly worse than the optimally tuned full ridge regressor.
- No comparisons are made to the full ridge regressor. Obviously, the ensemble would have to be worse, as ridge regression is typically the minimum mean squared error estimator in additive noise settings. However, this is a lost opportunity to demonstrate perhaps that bad double descent effects could be mitigated at a fraction of the computational cost (since the ensembles consider small subsets).
- Essentially, the risks are a linear combination of the individual risks of each ensemble member, plus some cross-terms. Thus in the limit of zero regularization, these ensembles would still have infinite double descent peaks, one for each different subsampling ratio. So to an extent, the authors already rely on regularization to help make the ensemble robust to double descent, although the benefits are realized for fairly small regularization already. Still, the authors should justify the need to regularize even with ensembles and discuss tuning.
- Related to the previous points, it's not clear why worst case risk is an important metric, since tuning of regularization in some way or another seems necessary.
- Part of the issues of statistical performance may be due to the non-overlapping partition of features. In the related works, ensembles with optimality guarantees typically require many members with overlapping subsets of features, which the authors do not consider here.
Others:
Overall, I found the plots to be very busy and difficult to decipher. I give one suggestion on how to improve Figure 4 in Questions.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Instead of the replica method, do the authors think that the same result could be obtained by the more rigorous "deterministic equivalences" from random matrix theory [DS21]? Related work has recently successfully applied this technique to random projections, which should be directly applicable to ensembles [LPJBT22], which seems closely related to the future direction raised by the authors at the end of the paper.
In Figure 4.a.i, I cannot understand the meaning of the white dashed line. I thought it should indicate the boundary between when $k^* = 1$ and $k^* = 2$, but I cannot figure out why there is a purple region above the line, and why the caption says "there is no region of the phase space where $k^* = 2$." Another suggestion is to add markers in the (a) plots to indicate the points in phase space that correspond to the (b) plots.
[DS21] E. Dobriban and Y. Sheng, Distributed linear regression by averaging, The Annals of Statistics, 49
(2021), pp. 918 – 943, https://doi.org/10.1214/20-AOS1984.
[LPJBT22] D. LeJeune, P. Patil, H. Javadi, R. G. Baraniuk, R. J. Tibshirani, “Asymptotics of the sketched pseudoinverse,” 2022, https://arxiv.org/abs/2211.03751
Other remarks:
- 1: "Bagging" is conventionally short for "bootstrap aggregating", but you only subsample, which is often called "subbagging" in the literature.
- 2: Ensembles do reduce variance, but subsampling feature adds bias.
- 4: "the the"
- 212: "eta" should be $\eta$
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: While I understand the usefulness of the replica method for obtaining quick and often accurate results, it is unfortunate that it is a non-rigorous approach. The authors have mentioned this in the supplemental material, but I think it needs mention somewhere in the main paper. This is important, as it is not clear whether some of the things done here are "legal"---for example, many random matrix theory results which could be applied here to perform similar analysis require covariance matrices with uniformly bounded operator norm, yet the globally correlated model violates this, which would typically require additional care to ensure that the result should hold, rather than simply plugging in the covariance. The reader deserves to know that while the general intuitions are likely true to hold, they should be careful in blindly applying the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for writing a thorough review of our submission. Please see the global response as well as the below.
We had two interpretations of the feature noise in mind when creating the problem setup. The first is as an inherent noise due to stochasticity in a physical neural network, such as an analog neural network or a biological neural circuit. The second is as a corruption of the input data. In both cases, the noise is present in both training and test data. Whether the noise is introduced through noise in the data itself or introduced through noise in the neural network computation is a matter of interpretation—but leads to an identical mathematical model. If accepted, we will clarify this in the statement of and discussion following proposition
Thank you for your comment that regularization is the “‘correct’ solution to double descent,” which we agree with. We do not argue that heterogeneous ensembling is the best regularizer, but rather aim to introduce heterogeneous ensembling as a regularization method and study its properties. We will add to figure 3c comparisons with the learning curves at optimal L2 regularization. We will also add the following sentences: “In the feature subsampling ensemble, we find that heterogeneity of this type smooths out the double-descent peak of the learning curve. However, when computational resources permit it, a single fully-connected readout layer with optimal L2 regularization outperforms a feature subsampling ensemble, unless additional sources of variance over the ensemble are introduced (see section 2.6).”
Thank you for raising the interesting point that the error diverges at zero regularization. Note that the genuine divergence of the generalization error is an artifact of the infinite-dimensional limit. We also show in the supplemental material that heterogeneous subsampling can mitigate double-descent without any regularization in a classification setting. The linear regression model with small positive regularization serves as a fruitful demonstration of the benefits of heterogeneous ensembling. We will add that “in the infinite-dimensional regression setting, a small regularization is always necessary to prevent divergence of the generalization error at the interpolation threshold of each ensemble member.” We will also add to figure 3c comparisons with the learning curves at optimal regularization.
In a regression setting, where regularization is necessary to prevent divergence of the generalization error, we propose the following procedure to quickly obtain a robust predictor:
1. Pick a reasonable but small regularization parameter.
2. Train an ensemble of linear predictors on varying numbers of data features.
Through this procedure, one can avoid the sharp edges of double-descent without a computationally expensive parameter sweep. We will update our discussion to make this clearer. In the CIFAR10 classification setting, we demonstrate that heterogeneous ensembling smooths out the double-descent curve at zero regularization, as in a classification task percentage error is naturally bounded.
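As a rough illustration of this two-step procedure, here is a minimal NumPy sketch of a heterogeneous feature-subsampling ridge ensemble on synthetic data; all sizes, subset dimensions, and the regularization strength are hypothetical choices, not the paper's settings:

```python
import numpy as np

# Minimal sketch: ridge regression per ensemble member, each member seeing a
# different random subset of features; predictions are averaged at the end.
rng = np.random.default_rng(1)
M, P, lam = 100, 80, 1e-2            # features, training samples, small ridge
X = rng.standard_normal((P, M))
w_true = rng.standard_normal(M)
y = X @ w_true + 0.5 * rng.standard_normal(P)

X_test = rng.standard_normal((200, M))
y_test = X_test @ w_true

subset_sizes = [20, 40, 60]          # heterogeneous input dimensionalities
preds = []
for m in subset_sizes:
    idx = rng.choice(M, size=m, replace=False)   # features this readout sees
    Xs = X[:, idx]
    # Ridge solution on the subsampled features
    w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(m), Xs.T @ y)
    preds.append(X_test[:, idx] @ w)

y_hat = np.mean(preds, axis=0)       # ensemble averages member predictions
mse = np.mean((y_hat - y_test) ** 2)
```

Because the members interpolate at different sample sizes (one per subset size), no single sample size hits every member's double-descent peak at once, which is the intuition behind the robustness claim.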
We have since updated our calculations for proposition 2 to consider overlapping partitions of features.
## On Using the Replica Trick
In the remarks made directly after the statement of our main result in the main text, we state that the replica trick is a “non-rigorous but standard” approach. We make no claim that our results are fully rigorous, but because the results obtained from the replica trick have been demonstrated to coincide with rigorous results for many similar problems, and our analytical results show excellent agreement with numerical simulations, we have good reason to expect that our result is correct. However, in light of the concerns raised here we see fit to change the statement of our main result to further emphasize that the replica trick is not rigorous, and to establish a rigorous basis for our main result.
In the case where the data covariance matrix has a bounded spectrum, we believe that our results may be obtained through a clever special case of the rigorous result of Loureiro [2022]. While their derivation does not explicitly consider ensembles with variation in the number of features viewed by each ensemble member, this type of heterogeneity may be added post-hoc by choosing data covariance matrices which “zero out” a number of features which varies over the ensemble. If accepted, we will add a supplemental section discussing this correspondence and including a detailed derivation of our result from the general result of Loureiro [2022]. We will also update the “proof” of the main theorem to the following:
“We calculate the terms in the generalization error using the replica trick, a standard but non-rigorous method from the statistical physics of disordered systems. The full derivation may be found in the SI. In the special case where the covariance matrices $\Sigma_s, \Sigma_0$ have bounded spectrum, this result may be obtained as a clever special case of the results of Loureiro [2022] (see SI for derivation).”
We will also add the following remark after the proof of proposition 2:
“Note that, as in this case $\Sigma_s$ does not have a bounded spectrum, this result does not follow from the rigorous results of Loureiro [2022]. However, we find excellent agreement between theory and experiment when data dimension is sufficiently large.”
We emphasize that, even as our general result may be recovered as a special case of the results of Loureiro [2022], simplifying this general result in the special case of subsampling from globally correlated features is a very tedious calculation which requires significant work. Our investigation of this general result also differs entirely from Loureiro [2022], which considered only ensembles with the same number of readout weights across ensemble members, and included no study of feature noise or readout noise.
---
Rebuttal Comment 1.1:
Title: Requesting further justification of readout error
Comment: I appreciate the authors' detailed response and revision proposals. In general, I like the revisions that you have suggested, and am inclined to increase my score. In particular, I think it would be very good to add the proof via special case of Loureiro [2022] for bounded operators. However, I am still not satisfied with the justification of readout noise, which seems to be a significant part of the settings in which these ensembles are worth using (e.g., Figure 4).
The authors have given justification of readout noise from dropout or biological neural networks, which is somewhat reasonable in my opinion if there is no feature subsampling (i.e., all members of the ensemble use the same features). However, I struggle to imagine a concrete setting in which feature subsampling and readout noise would occur together. Do the authors have a better example of where I could expect to see both together, where a bunch of models with readout noise would be trained on different features? The artificial and biological neural network justification seems weak here, since such models are rarely trained on subsets of features, not to mention they are not linear.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read and consider our rebuttal. One concrete setting in which feature subsampling and readout noise necessarily co-exist is in biological neural circuits for recognition of visual stimuli. We clarify that in this setting, we conceptualize the features as representations in the visual cortex (not as the features of the raw data) from which noisy downstream neurons can access only a subset of the millions of relevant “feature neurons.” In this case, each member of the ensemble of readouts would sample from a different subset of the available features.
Similarly, in artificial neural networks, we may conceptualize features not as pixels of the raw inputs, but as the dimensions of a feature map, from which readout weights are learned during fine-tuning. This is the setting that we explore in figure S1, where feature subsampling ensembles are trained on the neurons of the top hidden layer of a pre-trained deep network. Here there is no readout noise, but we still see a benefit to the feature-subsampling ensemble with heterogeneous input dimensionality in the absence of regularization, though we agree that this approach isn’t necessarily worth using when computational resources permit a hyper-parameter search for the optimal regularization. We will make this clear by adding a direct comparison with predictions from readouts trained with “optimal regularization” in figure 3c and figure S1.
Readout noise and feature subsampling can also coexist in any physical neural network with sparse connectivity. While feature subsampling may not currently be widely employed in this context, our results suggest that it should be used for any neural network which is “noise-dominated” — a notion we have made precise in the special case of equicorrelated features (see global rebuttal). | null | null | null | null | null | null |
Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy | Accept (poster) | Summary: This paper studies data pruning in the noisy-label setting. Built on re-labeling models, it proposes Prune4ReL, which finds a subset that maximizes the re-labeling accuracy. In particular, it introduces neighborhood confidence as the criterion for selection, as well as a greedy algorithm to select the subset. Evaluations show its superiority over prior data pruning methods.
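The greedy subset selection summarized above can be illustrated with a generic sketch. This is a stand-in, not the paper's algorithm: the concave (square-root) utility and the `contrib` matrix are assumptions made here purely to show how a greedy marginal-gain loop over a neighborhood-confidence-style objective would work.

```python
import numpy as np

def greedy_select(contrib, budget):
    """Illustrative greedy selection under an assumed concave (submodular)
    utility -- a stand-in for maximizing total neighborhood confidence.
    contrib[i, j]: confidence that selecting example i contributes to j."""
    n = contrib.shape[0]
    covered = np.zeros(n)          # accumulated contribution per example
    selected = []
    remaining = set(range(n))
    for _ in range(min(budget, n)):
        base = np.sqrt(covered).sum()
        # marginal utility gain of adding each remaining candidate
        gains = {i: np.sqrt(covered + contrib[i]).sum() - base
                 for i in remaining}
        best = max(gains, key=gains.get)
        covered += contrib[best]
        selected.append(best)
        remaining.remove(best)
    return selected
```

The concave transform makes the objective exhibit diminishing returns, which is what makes the greedy loop non-trivial (a purely additive objective would reduce to top-k selection).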
Strengths: The target setting is interesting and well motivated. The proposed neighborhood confidence and Prune4ReL is sound with theoretical guarantee. In addition, the evaluations are comprehensive and experimental results look promising.
Weaknesses: I don't see any major weakness, but I do have some comments:
1. The proposed data pruning method does require model training as many sample selection methods for robust learning do; I think these methods should also be considered as baselines and compared, even though they are not specifically designed for data pruning. Currently, the authors only compare SmallLoss, which is quite outdated.
2. There is some related work in the robust learning field that also leverages neighborhood information, but the authors did not include and discuss it, e.g., "Learning with Neighbor Consistency for Noisy Labels"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors clearly stated the limitations in appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1. The proposed data pruning method does require model training as many sample selection methods for robust learning do; I think these methods should also be considered as baselines and compared, even though they are not specifically designed for data pruning.`
Thank you very much for helping us improve our paper. Many sample selection methods, including the small-loss trick, have been developed for the purpose of removing noisy examples from the training set. However, these methods do not consider the importance of examples in terms of re-labeling. Thus, a new data pruning method should ***not*** completely remove noisy examples but select them if they are expected to be relabeled correctly. We expect that other sample selection methods would also perform worse than ours because they do not try to select noisy examples which could be potentially helpful. Following your suggestion, we will try to add stronger sample selection baselines such as Co-teaching+[a] in the final version.
[a] Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama: How does Disagreement Help Generalization against Label Corruption? ICML 2019: 7164-7173
`Q2. There is some related work in the robust learning field that also leverages neighborhood information, but the authors did not include and discuss it, e.g., "Learning with Neighbor Consistency for Noisy Labels"`
Thank you for bringing up an important topic for discussion. The method NCR[b] that you mentioned utilizes consistency regularization among neighbors so that examples with similar feature representations produce similar outputs. Consequently, NCR and Prune4Rel share the philosophy that neighbor examples are useful in the presence of label noise. Nevertheless, the objectives of the two methods in utilizing the neighborhood are distinct: NCR aims to reduce the impact of incorrect labels, whereas Prune4Rel seeks to determine the contribution of an example to the re-labeling accuracy. The final version will include this discussion.
[b] Ahmet Iscen, Jack Valmadre, Anurag Arnab, Cordelia Schmid: Learning with Neighbor Consistency for Noisy Labels. CVPR 2022: 4662-4671
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: I read the authors' rebuttal and will keep my score. In addition, I think this work is also related but missed in discussion/comparison: Resolving Training Biases via Influence-based Data Relabeling.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for suggesting an additional reference. We will definitely include it for discussion or comparison purposes in the final version.
Title: Thank you | Summary: This paper studies the task of data pruning, specifically in the setting of label noise. The authors propose a method to perform data pruning by maximizing the total neighborhood confidence of the training examples (which is equivalent to maximizing the relabeling accuracy).
The authors theoretically analyze this particular setting, bounding the error of a model trained on the subset that satisfies the expansion and separation assumptions from prior work. Their resulting bound is inversely proportional to the neighborhood confidence.
Their empirical comparison against existing methods shows mixed results; namely, in Table 1, the results are slightly better than existing baselines on a small number of tasks but predominantly match the performance of existing methods. In Table 2, the existing methods Forget and kCenter also seem to select similar or fewer noisy examples than the proposed method (except in the case of CIFAR-100N with 0.2 or 0.4).
Strengths: The authors present a theorem that shows that the error of a model trained on a subset of data is inversely related to the neighborhood confidence. They propose a method that maximizes the neighborhood confidence, which in turn minimizes this bound.
The authors show in Figure 3 that the test accuracy with SOP increases when using a subset produced by the proposed method. However, it is unclear why this is necessarily the case, since there are similar or fewer noisy examples selected by the Forget baseline method.
The authors provide ablation studies that study the effect of neighborhood threshold $\tau$ and with different confidence metrics.
Weaknesses: The bolding strategies in Table 1 are a bit misleading; in many scenarios, a baseline achieves the same performance as the proposed method (and sometimes achieves a smaller variance), but the proposed method is still listed in bold. In fact, the reported method achieves the same performance across many of the tasks.
Similarly, the results in Table 2 show that the Forget baseline seems to select subsets with a smaller ratio of noisy examples when compared to the proposed method. However, in subsection 4.3, the authors claim that this method selects a high ratio of noisy examples.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1. Their empirical results show mixed results in the comparison against existing methods. In Table 1, the results are slightly better than existing baselines on a small number of tasks but predominantly match the performance of existing methods`
Thank you very much for your careful review. We acknowledge your concerns but respectfully argue that **our empirical results are promising** in three ways.
> **(1) Prune4Rel *consistently* outperforms in various datasets and selection ratios.** No baseline consistently shows comparable accuracy in Table 1. To illustrate our claim, the average over different selection ratios for each dataset with SOP+ is calculated as follows.
| Methods | CIFAR-10N Random | CIFAR-10N Worst | CIFAR-100N Noisy |
| -------------- | -------- | -------- | -------- |
| Uniform | 91.8 | 88.0 | 56.9 |
| SmallL | 87.2 | 86.1 | 59.6 |
| Margin | 79.9 | 71.2 | 42.0 |
| kCenter | 92.0 | 88.4 | 56.9 |
| Forget | 91.2 | 85.4 | 55.6 |
| GraNd | 64.2 | 44.7 | 32.7 |
| **Pr4Rel** | 92.7 | 89.2 | 59.5 |
| **Pr4Rel$_B$** | **92.8** | **89.3** | **60.8** |
> **(2) The improvement becomes more pronounced when the selection ratio is *relatively low* (e.g., 20% in Table 1).** It is relatively hard to show the difference at high selection ratios, because not many examples are pruned. Table 1 shows that slightly better or comparable accuracy tends to appear at high selection ratios.
> **(3) The results on another dataset reaffirm the superiority of Prune4Rel.** The table in *Q2 of the reviewer jBWT* is obtained for ImageNet-1K with synthetic flip noises.
Overall, we hope that the reviewer's concerns on empirical results are resolved by our three supporting arguments. We will enrich Table 1 by adding the averages and new results in the final version.
`Q2. In Table 2, it seems that the existing method Forget and kCenter seem to select similar or fewer noisy examples than the proposed method (except in the case of CIFAR-100N with 0.2 or 0.4).`
The goodness of a selected subset cannot be determined based solely on the proportion of noisy examples; the **quality** (or **self-correctability**) of these noisy examples must also be considered. Noisy examples selected by Prune4Rel are mostly self-correctable because it maximizes the total neighborhood confidence of the training set. In contrast, those selected by existing data pruning methods such as kCenter and Forget are ***not*** guaranteed to be self-correctable. To contrast the proportion of self-correctable examples in kCenter, Forget, and Prune4Rel, we plan to expand Table 2 as follows. Here is the proportion of self-correctable examples among the noisy ones within a selected subset at a selection ratio of 0.2.
| Model | Methods | CIFAR-10N Random | | |
| ----- | ---------- | -------- | -------- | -------- |
| | | Test Accuracy | % Noisy | % (Self-Correct / Noisy) |
| SOP+ | kCenter | 86.3 | 19.0 | 75.2 |
| | Forget | 82.4 | 17.0 | 61.7 |
| | **Pr4Rel** | 88.2 | 17.0 | **90.3** |
Therefore, the noisy examples selected by kCenter and Forget may harm the generalizability of a model, whereas those selected by Prune4Rel are rather useful for training.
In Section 4.3, we meant to say that Prune4Rel selects more high-quality noisy examples aggressively as the subset size increases, based on the increased confidence. We recognize that the current writing is not very clear, and we will revise it accordingly.
`Q3. The authors show in Figure 3 that the test accuracy with SOP+ increases when using a subset produced by the proposed method. However, it is unclear why this is necessarily the case, since there are similar or fewer noisy examples selected by the Forget baseline method.`
This question is answered by the **response to Q2**. The contribution of noisy examples to a model's relabeling capability is low in kCenter and Forget, whereas it is very high in Prune4Rel, despite the fact that the proportion of noisy examples is comparable.
`Q4. The bolding strategies in Table 1 are a bit misleading; in many scenarios, a baseline achieves the same performance as the proposed method (and sometimes achieves a smaller variance), but the proposed method is still listed in bold.`
We apologize for any confusion that our current presentation may have caused. We intended to highlight only our methods, unless they were inferior to any baseline. We will revise the strategy for bolding such that the highest values (including ties) are highlighted. For example, part of Table 1 for SOP+ will be changed to the following.
| Methods | | CIFAR-10N | Random | | | CIFAR-10N | Worst | | | CIFAR-100N | Noisy | |
| -------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| | 0.2 | 0.4 | 0.6 | 0.8 | 0.2 | 0.4 | 0.6 | 0.8 | 0.2 | 0.4 | 0.6 | 0.8 |
| SmallL | 77.6 | 86.2 | 90.7 | 94.3 | 78.8 | 84.1 | 89.3 | 92.3 | 48.5 | 59.8 | 63.9 | **66.1** |
| Margin | 52.1 | 79.6 | 92.6 | 95.1 | 45.7 | 61.8 | 84.6 | **92.5** | 20.0 | 34.4 | 50.4 | 63.3 |
| kCenter | 86.3 | 92.2 | 94.1 | **95.3** | 81.9 | 88.0 | 91.3 | 92.3 | 44.8 | 55.9 | 61.6 | 65.2 |
| **Pr4Rel** | 88.2 | 93.0 | **94.4** | 95.1 | 83.4 | **89.3** | **91.5** | **92.5** | 49.0 | 59.1 | **64.1** | 65.7 |
| **Pr4Rel$_B$** | **88.6** | **93.1** | 94.2 | **95.3** | **84.2** | 89.1 | 91.3 | **92.5** | **52.9** | **60.1** | **64.1** | **66.1** |
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thanks for your response. I appreciate the changes for the bolding in Table 1 and the clarification for the distinction between noisy and self-correctable datapoints, as well as the additional results on synthetic flips for ImageNet-1k.
I'm happy to increase my score to a 5 but am not wholly convinced by the empirical results.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback
Comment: Thank you very much for your positive feedback! We are more than happy to answer any additional questions during the discussion period. Moreover, we will keep polishing the evaluation section so that our contribution can be more clearly delivered. | Summary: The paper proposes a novel data pruning algorithm, Prune4ReL, that maximizes the neighborhood confidence of the entire training examples, which is proportional to the likelihood of correct re-labeling. The paper demonstrates the effectiveness of Prune4ReL on four noisy datasets, where it outperforms baselines by a large margin.
Strengths: The writing of this article is very clear and easy to follow. Moreover, the methodology in this paper is also reasonable, with the necessary theoretical analysis. I enjoy this work very much! My only concern about this work is the experimental part.
Weaknesses: 1. Recent works like [1,2,3] are missing in those selected baselines.
2. Lack of results on ImageNet-1K, which is the most convincing part for us.
[1] Moderate: Moderate coreset: A universal method of data selection for real-world data-efficient deep learning. ICLR-2023
[2] SSP: “Beyond neural scaling laws: beating power law scaling via data pruning. NeurIPS-2022
[3] CCS: Coverage-centric Coreset Selection for High Pruning Rates. ICLR-2023
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In addition, the current dataset-pruning experiment mainly focuses on image classification, but this kind of task is too simple to reflect real needs. For example, for CIFAR or even ImageNet, we don't need dataset-pruning at all, and the cost of training on these datasets is very affordable for almost all scientific teams. I think the real application scenario of dataset-pruning should be the most popular tasks such as large language model (LLM) training and multi-modal training. So, what do you think is the main difficulty of using dataset-pruning for these tasks? How are these tasks different from image classification?
At present, the performance of dataset-pruning algorithms in image classification tasks is generally not very good; for example, the performance degrades severely under high pruning-ratio settings. Moreover, no existing pruning algorithm can outperform the random selection baseline by a significant margin. However, in the field of LLMs, as a contrast, some manually filtered datasets [1,2] can achieve a dataset size reduction of tens of times while performance remains unchanged or even significantly improves. So, can we say that manual filtering is still far better than algorithmic automated filtering? Or, what in the world is causing such a big gap?
[1]. Textbooks Are All You Need. Arxiv. [2306.11644]
[2]. LIMA: Less Is More for Alignment. Arxiv. [2305.11206]
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `The writing of this article is very clear and easy to follow up. Moreover, the methodology in this paper is also reasonable with necessary theoretical analysis. I enjoy this work very much!`
> We are very glad to hear that you enjoy reading our paper.
`My only concern about this work is the experimental part.`
> We did our best to improve the evaluation during the rebuttal period, and we hope that our efforts will address your sole concern.
`Q1. Recent works like [a,b,c] are missing in those selected baselines.`
Thank you very much for recommending important references. We have additionally conducted experiments with Moderate[a] and SSP[b], and the results with SOP+ are reported below. **Prune4Rel is also shown to outperform these recent baselines**, because they are not designed for the **noise-robust** learning scenario. We will add these results (also for other datasets) in the final version.
| Methods | | CIFAR-10N | Random | | | CIFAR-10N | Worst | |
| -------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| | 0.2 | 0.4 | 0.6 | 0.8 | 0.2 | 0.4 | 0.6 | 0.8 |
| Uniform | 87.5 | 91.5 | 93.4 | 94.8 | 81.9 | 87.5 | 90.8 | 91.8 |
| SmallL | 77.6 | 86.2 | 90.7 | 94.3 | 78.8 | 84.1 | 89.3 | 92.3 |
| Margin | 52.1 | 79.6 | 92.6 | 95.1 | 45.7 | 61.8 | 84.6 | **92.5** |
| kCenter | 86.3 | 92.2 | 94.1 | **95.3** | 81.9 | 88.0 | 91.3 | 92.3 |
| Forget | 82.4 | 93.0 | 94.2 | 95.0 | 71.1 | 87.7 | 90.6 | 92.2 |
| GraNd | 24.2 | 51.6 | 85.9 | 94.9 | 15.4 | 25.7 | 51.0 | 86.8 |
| **Moderate** | 88.3 | 92.8 | 94.1 | 94.6 | 75.0 | 81.9 | 87.7 | 91.8 |
| **SSP** | 80.5 | 91.7 | 93.8 | 95.0 | 70.8 | 86.6 | 89.2 | 92.3 |
| **Pr4Rel** | 88.2 | 93.0 | **94.4** | 95.1 | 83.4 | **89.3** | **91.5** | **92.5** |
| **Pr4Rel$_B$** | **88.6** | **93.1** | 94.2 | **95.3** | **84.2** | 89.1 | 91.3 | **92.5** |
We were unable to add CCS[c] within a week, but the results for CCS will be included in the final version.
[a] Xiaobo Xia, Jiale Liu, Jun Yu, Xu Shen, Bo Han, Tongliang Liu: Moderate Coreset: A Universal Method of Data Selection for Real-world Data-efficient Deep Learning. ICLR 2023
[b] Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari Morcos: Beyond neural scaling laws: beating power law scaling via data pruning. NeurIPS 2022
[c] Haizhong Zheng, Rui Liu, Fan Lai, Atul Prakash: Coverage-centric Coreset Selection for High Pruning Rates. ICLR 2023
`Q2. Lack of results on ImageNet-1K, which is the most convincing part for us.`
Thank you very much for helping us improve our paper. All datasets used for our experiments, CIFAR-10N, CIFAR-100N, WebVision, and Clothing-1M, have **real** label noise. However, for ImageNet-1K, we could not find a variant that contains real label noise. Thus, we injected synthetic noise into ImageNet-1K. In detail, we flipped the original label (e.g., $classId = 0$) to an incorrect label (e.g., $classId = 1$) randomly at a given noise rate. The results for ImageNet-1K are provided below and will be included in the final version. Here, the noise rate is set to 20%, and SOP+ is trained for 50 epochs from scratch with a batch size of 64. Prune4Rel also performs best on this dataset, demonstrating its versatility. (More results for high selection ratios will come during the discussion phase.)
| Methods | ImageNet-1K | 20% Noise | |
| --------------- | ------- | -------- | -------- |
| Selection Ratio | 0.01 | 0.05 | 0.1 |
| Uniform | 2.6 | 27.8 | 42.5 |
| SmallL | 5.8 | 22.8 | 31.4 |
| Forget | 0.8 | 4.1 | 8.3 |
| **Pr4Rel$_B$** | **6.0** | **30.2** | **44.3** |
`Q3. I think the real application scenario of dataset-pruning should be the most popular tasks such as large language model (LLM) training and multi-modal training. So, what do you think is the main difficulty of using dataset-pruning for these tasks? How are these tasks different from image classification?`
This is an excellent question. We agree with you that data pruning for multi-modal training and large language model (LLM) training will be more interesting than for image classification. Regarding LLM training, we believe that there are several challenges, compared to image classification. First, various levels---token, sentence, and document---can be considered as a pruning granularity for a large-scale corpus. Second, efficient selection criteria should be developed for LLMs, because most of the metrics typically used for image classification, e.g., gradients, are very expensive to calculate. Obviously, there will be additional challenges to investigate. After the review period concludes, we, the authors and the reviewer, can even attempt to collaborate on this extremely intriguing topic.
`Q4. Can we say that manual filtering is still far better than algorithmic automated filtering? Or, what in the world is causing such a big gap?`
Again, thank you for posing a very interesting research question. In our opinion, the quality, diversity, and redundancy of a given raw dataset are more important in determining the efficacy of data pruning than whether it is performed automatically or manually. (The capacity of a pre-trained model is also relevant given that [d] involves data pruning for fine-tuning.) Clearly, additional research is required to provide an exact answer to this question.
[d] Chunting Zhou et al.: LIMA: Less Is More for Alignment. CoRR abs/2305.11206 (2023)
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: Congrats!
I am very satisfied with your reply! This is a good article and very enlightening! I have a habit of giving conservative marks in the first stage review! I hope it will not affect you badly.
---
Reply to Comment 1.1.1:
Title: Thank you and more results on ImageNet-1K
Comment: We are happy to hear that you are very satisfied with our response. Also, even though the experiment for ImageNet-1K is not yet complete, we would like to update the progress using the interim results at this time. Prune4Rel is shown to maintain its dominance also in this dataset.
| Methods | ImageNet-1K | 20% Noise | | | |
| --------------- | ------- | -------- | -------- | -------- | -------- |
| Selection Ratio | 0.01 | 0.05 | 0.1 | 0.2 | 0.4 |
| Uniform | 2.6 | 27.8 | 42.5 | 52.7 | 59.2 |
| SmallL | 5.8 | 22.8 | 31.4 | 42.7 | 54.4 |
| Forget | 0.8 | 4.1 | 8.3 | 50.6 | 57.2 |
| **Pr4Rel$_B$** | **6.0** | **30.2** | **44.3** | **53.5** | **60.0** |
Please let us know if you have any additional questions. Again, your encouragement is greatly appreciated. | Summary: The paper proposes Prune4ReL, which prunes a noisy training dataset such that the performance of a Re-labeling trained downstream model is maximized.
Unlike previous work, the paper targets pruning a *noisy* dataset and explicitly considers the learning algorithm of the downstream model. The proposed method is specifically tailored to Re-labeling, automatically cleaning the noisy training dataset.
The utility function in Prune4ReL is inspired by the theory proposed in [1] and gracefully incorporated into the context of data pruning.
The experimental results show that most prior work suffers under noisy datasets, while Prune4ReL remains robust to the noise in the training dataset, resulting in substantial improvements over baselines.
[1] Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data, ICLR2021
Strengths: The strength of the paper include
- Clear presentation and easy-to-follow writing
- The proposed method is theoretically-inspired and, maybe more importantly, easy to implement
- The evaluation, together with the analysis, is convincing.
- Include the analysis on why and how the baseline fail
The paper is well-organized, and the method and evaluation are solid.
Weaknesses: The paper conducts a complete study on the proposed Prune4ReL, and the following weakness is relatively minor.
- Prune4ReL outperforms the baselines by a large margin, but the gap between PruneReL and uniform sampling is small.
- Some notations are not clear.
- Def. 3.1: the $x$ is not in the pixel space. Instead, it's in the embedding space. The author should state the dimension of $x$ at the beginning
- in Thm 3.4, it should be $\mathcal{S} \subseteq \tilde{D}$
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Experiment
- Uniform sampling is usually the second-best baseline. It performs especially well in CIFAR-10N Random/Worst and Clothing-1M. Can the authors elaborate more on this?
- Do the authors believe the gap between uniform sampling and Prune4ReL is small due to the greedy sampling? Or what are the other hypothesis?
- The derivation from the reduced neighbourhood confidence (Eq. 3) to the empirical reduced neighbour confidence (Eq.4) is confusing. What motivates using the cosine distance to perform a weighted sum? Is this motivated by importance sampling?
- WebVision and Clothing-1M are crawled from the web, but Prune4ReL seems to work better in Clothing-1M. Can the authors elaborate on this further?
- It seems that kCenter and GraNd are missing in Figure 3b. The authors have to find a better way to visualize this plot (Figure 3b). Try to zoom in a little bit.
- Section 4.5 is like the motivation of the paper. Consider re-order them.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: As mentioned in the #weakness, there are some minor limitations.
- Empirical results: The proposed method still requires some work on improving over uniform sampling baseline, and the authors did not justify the strong uniform baseline.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We sincerely appreciate the reviewers' constructive comments and positive feedback on our manuscript.
`Q1. Uniform sampling is usually the second-best baseline. It performs especially well in CIFAR-10N Random/Worst and Clothing-1M. Can the authors elaborate more on this?`
To improve the performance of Re-labeling models, sample selection should achieve a balance between easy (clean) and hard (possibly noisy) examples. This is because the easy (clean) examples support the correct relabeling of neighbor hard examples, while the hard examples greatly boost the test accuracy if they are re-labeled correctly. Existing data pruning algorithms that prefer hard examples may perform poorly due to a lack of easy examples that support accurate relabeling, and clean sample selection algorithms that prefer easy examples may also perform poorly due to a lack of hard (informative) examples that aid in test accuracy after relabeling. On the other hand, uniform sampling shows somewhat robust performance by selecting easy and hard examples in a balanced way.
For CIFAR-10N, as shown at the selection ratio of 0.2 in Table 1, uniform sampling was more effective at a small selection ratio because a proper amount of easy (clean) examples was collected to support accurate relabeling. However, at the selection ratio of 0.8, uniform sampling tended to show lower performance than other data pruning baselines; since a sufficient amount of easy examples already existed, selecting hard examples became more important. For Clothing-1M, we conjecture that uniform sampling performed well because a low selection ratio was used for testing the fine-tuning. Overall, we will add this discussion in the final version.
`Q2. The derivation from the reduced neighborhood confidence (Eq. 3) to the empirical reduced neighborhood confidence (Eq.4) is confusing. What motivates using the cosine distance to perform a weighted sum? Is this motivated by importance sampling?`
The reduced neighborhood confidence (Eq. 3) of each example is calculated as the sum of the confidences in its neighborhood. However, identifying the neighborhood of an example is very expensive and thus practically infeasible. Thus, instead of explicitly finding the neighborhood of each example, the *empirical* reduced neighborhood confidence (Eq. 4) approximates it using the cosine similarity **as the likelihood of belonging to the neighborhood**. We will elaborate on this rationale in the final version.
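The approximation described above, replacing an explicit neighborhood search with cosine similarity used as the likelihood of belonging to a neighborhood, can be sketched in a few lines. This is a minimal illustrative sketch under assumptions made here (the function name, dense pairwise similarity, and clipping of negative similarities are not from the paper):

```python
import numpy as np

def empirical_neighborhood_confidence(feats, confs):
    """Illustrative sketch: approximate each example's neighborhood
    confidence as a cosine-similarity-weighted sum of the prediction
    confidences of the other examples."""
    # L2-normalize embeddings so dot products become cosine similarities
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    unit = feats / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T              # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)       # exclude self-similarity
    sim = np.clip(sim, 0.0, None)    # keep only non-negative weights
    return sim @ confs               # weighted sum of neighbor confidences
```

The weighted sum avoids materializing an explicit k-nearest-neighbor graph: every example contributes to every other example's neighborhood confidence, scaled down as the embeddings become less similar.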
`Q3. WebVision and Clothing-1M are crawled from the web, but Prune4ReL seems to work better in Clothing-1M. Can the authors elaborate on this further?`
We interpret your question to mean that Prune4Rel achieves higher accuracy at lower selection ratios in Clothing-1M. Following the existing literature[a] for Clothing-1M, we used a ResNet model pretrained on ImageNet, as specified in Section 4.1. Thus, **fine-tuning was tested for Clothing-1M**, and this setting explains why high accuracy was achieved using a small portion of the Clothing-1M training set. Please inform us if we misunderstood your question.
[a] Sheng Liu, Zhihui Zhu, Qing Qu, Chong You: Robust Training under Label Noise by Over-parameterization. ICML 2022: 14153-14172
`Q4. kCenter and GraNd are missing in Figure 3b. The authors have to find a better way to visualize this plot (Figure 3b). Try to zoom in a little bit.`
Thanks for pointing out this issue. The enhanced plot is included in the pdf file for the global response. Since *kCenter* requires huge computation and memory costs, it is not feasible to run in our environment (refer to Figure 3c). Also, since *Forgetting* requires prediction history during the warm-up training period to calculate the forgetting score of each training example, we could not run it for Clothing-1M where sample selection is performed from the pre-trained model.
`Q5. Section 4.5 is like the motivation of the paper. Consider re-ordering them.`
Thank you very much for helping us improve our paper. As you properly recognize, utilizing Re-labeling models for data pruning under label noise is much more effective than utilizing standard models. Thus, we will move Section 4.5 to Section 4.3 in order to confirm the motivation early on. In addition, we will add a summary of Section 4.5 at the end of the Introduction: "When combined with the pruning methods, the performance of the Re-labeling models significantly surpasses that of the standard models in CIFAR-10N and CIFAR-100N by up to 21.6%."
`Q6. Some notations are not clear.`
> In Definition 3.1, the dimensionality of $x$ is set to be 512 for CIFAR-10N and CIFAR-100N and 2048 for WebVision and Clothing-1M. We will include this detail in the final version.
> In Theorem 3.4, we will correct the notation to $\mathcal{S} \subseteq \tilde{\mathcal{D}}$. Thank you for pointing out the typo.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Great work! The authors addressed all my concerns and please include the discussion of the uniform sampling baseline in the final version. I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad to hear that you are satisfied with our response and will surely incorporate this discussion into the final version. Thank you again for your support and insightful feedback. | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewers' positive feedback and valuable comments. Most reviewers agreed that (1) **the problem setting and methodology are reasonable and novel**, (2) **the theoretical analysis of the methodology is sound**, and (3) **the evaluation was performed extensively**. Because the reviewers' comments are mostly about the experiments, **we have significantly improved (and are improving) the evaluation section** during the rebuttal period by adding two baselines, one dataset, and one metric. (See the attached PDF file for an enhanced plot.) Therefore, we believe that the superiority of Prune4Rel over other baselines has been demonstrated much more clearly, and we hope that the remaining concerns are addressed by the rebuttal. We are happy to answer additional questions during the discussion period.
Pdf: /pdf/b59a65d226e25b29e61b3f2b5056d4187514fd0c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Fair Graph Distillation | Accept (poster) | Summary: This paper proposes fair graph distillation (FGD), as an advanced graph distillation approach to generate fair distilled graphs. FGD focuses on the group fairness issue in graph distillation methods and aims to generate fair distilled graphs with respect to sensitive attributes for nodes. This paper proposes a simple yet effective metric for measuring the bias in representation space, namely coherence, for distilled graphs, and a bi-level optimization framework to generate fair graph distillations iteratively. Experimental results illustrate that the proposed methods can achieve performance-fairness trade-offs across various datasets and architectures.
Strengths: - This paper considers group fairness issue, which is novel in graph distillation.
- The proposed metric for bias measurement is simple yet convincing.
- The theorems and algorithms introduced in the paper are well presented.
- Experimental results show the effectiveness of FGD in debiasing distillation.
Weaknesses: - More baselines in graph distillation are needed. For example, *FairGNN* and *EDITS* are introduced as baselines in debiasing, while no graph distillation methods (such as [1]) are compared.
- Results in Table 1 show that distilled graphs can achieve some improvements in debiasing, while the AUC performance drops significantly (e.g., from 94% to 70%), which is unacceptable for node classification.
[1] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. Dataset Condensation with Gradient Matching. In ICLR 2021
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Is FGD capable of handling muti-class cases? In the paper authors mentioned to use average coherence to optimize the distilled graph, yet there are no experiments on this.
- What is the computational complexity of the overall algorithm?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: - The distilled graphs can't achieve comparable performance compared with original graphs.
- The presented method is only tested in binary classification scenario.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 1. More baselines in graph distillation are needed.
The vanilla baseline we utilized in our work is derived from the graph condensation method introduced in [2], which tailors the gradient-matching idea of [1] to graph data. [1] itself cannot be directly applied to graphs because it does not produce a distilled adjacency matrix. We therefore chose [2] as our only vanilla graph distillation baseline.
[1] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. Dataset Condensation with Gradient Matching. In ICLR 2021
[2] Jin, Wei, et al. "Graph condensation for graph neural networks." ICLR 2022.
# 2. Utility performance drop.
Thanks for this great point. We have meticulously tuned the vanilla distillation method for these datasets, taking the reduction rate into consideration. In the original experiments, we used a reduction rate of 0.08 (meaning that the number of nodes in the distilled graph is 8% of the original graph) for all datasets. Such a small distilled graph leads to significantly worse utility. Therefore, in the new experiments, we increased the reduction rate to 0.32 for Pokec-n and 0.16 for Recidivism to improve accuracy. We report the **updated results on the Pokec-n and Recidivism datasets** as follows.
| Dataset | Metric | Vanilla (GCN) | FGD (GCN) |
|:----------:|:----:|:-------:|:----:|
| Pokec-n | ACC↑ | 62.16 | 61.84 |
| Pokec-n | AUC↑ | 65.82 | 64.59 |
| Pokec-n | F1↑ | 57.38 | 58.62 |
| Pokec-n | DP↓ | 5.83 | **0.84** |
| Pokec-n | EO↓ | 6.34 | **1.94** |
| Recidivism | ACC↑ | 78.18 | 77.51 |
| Recidivism | AUC↑ | 75.83 | 75.84 |
| Recidivism | F1↑ | 63.84 | 61.98 |
| Recidivism | DP↓ | 5.25 | **1.34** |
| Recidivism | EO↓ | 3.49 | **1.98** |
1. **Utility Improvement**: The utility performance of the model improves when trained on a larger distilled graph but exhibits worse fairness performance.
2. **Utility Gap Attribution**: The utility gap observed is attributable to the distillation method itself.
3. **Fairness Issue Persistence**: Even when the utility performance is subpar, the fairness problem still exists and is a concern.
4. **Our Method's Advantage**: Compared to the vanilla distillation method, our Fair Graph Distillation (FGD) enhances fairness performance while maintaining or even improving utility performance.
# 3. What is the computational complexity of the overall algorithm?
We provide the time and space complexity analysis as follows:
1. **Forward Process of GCN**:
- On the original graph, the time and space complexity are $O\left(r^L N d^2\right)$ and $O\left(r^L N d + N^2\right)$, where $r$ is the number of sampled neighbors per node, $L$ is the number of layers, and $N$ and $d$ are the number of nodes in the real data and the number of hidden units, respectively.
- On the condensed graph, the time and space complexity are $O\left(L N^{\prime 2} d + L N' d\right)$ and $O\left(L N^{\prime 2} + L N' d\right)$, where $N'$ is the number of nodes in the distilled data.
2. **Backward Propagation**:
- The time complexity of calculating the second-order derivatives is an additional $O\big(\left|\boldsymbol{\theta}\right| N'(N'+d)\big)$, where $\left|\boldsymbol{\theta}\right|$ is the number of parameters in $\theta$. The space complexity is $O( |\boldsymbol{\theta}|^2 + N'(N'+d))$.
3. **Coherence Loss**:
- The time and space complexity are $O\left(L N^{\prime 2} d \right)$ and $O\left(L N^{\prime 3} d \right)$, respectively.
We will add this complexity analysis in the revised manuscript.
# 4. The distilled graphs can't achieve comparable performance compared with original graphs.
We achieved comparable performance to the original graphs by tuning the reduction rate. Please see #2 for a detailed response.
# 5. The presented method is only tested in binary classification scenario?
Thank you for noting this limitation. Extending our approach to categorical, multiple, or continuous sensitive attributes indeed presents new challenges. We have discussed such an extension in **Appendix H.1** and leave it for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clear explanation and additional experimental results. This is an insightful work and I will change my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer jsrW,
I want to extend our heartfelt thanks for recognizing what we've been working on. Your thoughtful advice and encouraging feedback have truly helped us take our work to the next level.
Best regards,
Authors | Summary: This paper aims to address the issue of fairness in graph data distillation, a process that condenses large real graphs into smaller distilled versions for more manageable computation with GNNs. The authors propose FGD, which introduces a new bias metric called coherence and uses a bi-level optimization algorithm, and has been shown to provide improved performance-fairness trade-offs in numerous experiments.
Strengths: 1. This paper studies an interesting issue of fairness within distilled graphs, which arises due to the absence of sensitive features.
2. This paper devises a bias measurement named coherence specifically for distilled graphs and suggests a framework utilizing this metric to facilitate the realization of fair graph distillation. The theoretical analysis in this paper is well founded.
3. The authors have conducted a thorough experimental analysis, and the presented results indicate that the proposed framework is adaptable to numerous renowned GNNs. This framework improves the trade-off between prediction performance and fairness across a range of datasets, signifying the framework's effectiveness and wide-ranging applicability.
Weaknesses: 1. Are there any fairness studies in dataset distillation in other fields such as computer vision and natural language processing? If so, there is a lack of discussion comparing the proposed framework with other similar works.
2. The time and space complexity are not mentioned.
3. In terms of methodology, this paper primarily uses strategies that were already developed before, and the unique contribution is the incorporation of a new loss term. Overall, the technical contribution of this work seems incremental.
4. The experimental part is inadequate, lacking integration of the proposed framework with large-scale and advanced GNNs, experiments on large datasets, and an analysis of the extra overhead of the proposed framework.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Whether the proposed framework can be applied to graph-level tasks, not just node-level ones.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 1. Are there any fairness studies in dataset distillation in other fields?
In the existing literature, fairness studies related to dataset distillation in CV or NLP are not commonly found. The only fair distillation work we could find is [1], which studies the fairness problem in data distillation for text classification tasks using an adversarial method.
To our knowledge, our work is the first paper addressing group fairness issues in **graph distillation**, and we focus on graph data due to the unique challenge that graphs are non-Euclidean data.
We will include a section in the revised manuscript discussing the novelty of our work relative to these other fields.
[1] Han, Xudong, et al. "Towards Fair Dataset Distillation for Text Classification." (SustaiNLP). 2022.
# 2. The time and space complexity are not mentioned.
We provide the time and space complexity analysis as follows:
1. **Forward Process of GCN**:
- On the original graph, the time and space complexity are $O\left(r^L N d^2\right)$ and $O\left(r^L N d + N^2\right)$, where $r$ is the number of sampled neighbors per node, $L$ is the number of layers, and $N$ and $d$ are the number of nodes in the real data and the number of hidden units, respectively.
- On the condensed graph, the time and space complexity are $O\left(L N^{\prime 2} d + L N' d\right)$ and $O\left(L N^{\prime 2} + L N' d\right)$, where $N'$ is the number of nodes in the distilled data.
2. **Backward Propagation**:
- The time complexity of calculating the second-order derivatives is an additional $O\big(\left|\boldsymbol{\theta}\right| N'(N'+d)\big)$, where $\left|\boldsymbol{\theta}\right|$ is the number of parameters in $\theta$. The space complexity is $O( |\boldsymbol{\theta}|^2 + N'(N'+d))$.
3. **Coherence Loss**:
- The time and space complexity are $O\left(L N^{\prime 2} d \right)$ and $O\left(L N^{\prime 3} d \right)$, respectively.
We will add this complexity analysis in the revised manuscript.
# 3. The technical contribution of this work seems incremental.
We respectfully disagree with this comment. We clarify that one of our contributions is identifying the fairness issue in the graph distillation problem, which is interesting and novel. As for our technical contribution (i.e., the coherence loss), it is significant and non-trivial. For a distilled graph, the sensitive attribute is missing, and thus it is challenging to identify the bias in the node representations of the distilled graph. To tackle this challenge, we exploit the geometric connections in data distillation and provide the rationale for sensitive attribute estimation. In this way, the estimated sensitive attribute information can be used to measure the bias of the distilled graph. We believe the proposed coherence loss is significant and non-trivial.
# 4. Experiments on large-scale and advanced GNNs. The extra overhead of proposed framework.
It is infeasible to conduct experiments on large-scale graph datasets in the fairness community, since no larger public graph dataset is available. The datasets we use in the paper are the most common in the graph fairness field. For a large-scale GNN architecture, we have conducted additional experiments using **GraphSAINT**, a well-known large-scale GNN method. We also record the 100-epoch training time for vanilla distillation and FGD:
| Dataset | Metric | Real | Vanilla | FGD |
|:----------:|:----:|:----:|:-------:|:----:|
| Pokec-z | ACC↑ | 70.84 | 64.56 | 63.73 |
| Pokec-z | AUC↑ | 78.19 | 70.43 | 71.94 |
| Pokec-z | F1↑ | 70.94 | 67.84 | 66.74 |
| Pokec-z | DP↓ | 9.18 | 7.33 | 2.70 |
| Pokec-z | EO↓ | 10.82 | 6.58 | 2.06 |
| Pokec-z | 100-epoch time | | 59.8s | 143.2s |
| Pokec-n | ACC↑ | 69.74 | 61.45 | 58.34 |
| Pokec-n | AUC↑ | 75.19 | 62.48 | 60.84 |
| Pokec-n | F1↑ | 69.71 | 53.51 | 50.39 |
| Pokec-n | DP↓ | 3.64 | 5.65 | 3.98 |
| Pokec-n | EO↓ | 1.42 | 2.98 | 2.15 |
| Pokec-n | 100-epoch time | | 55.9s | 125.0s |
| German | ACC↑ | 71.17 | 70.63 | 70.06 |
| German | AUC↑ | 68.83 | 56.68 | 55.30 |
| German | F1↑ | 82.34 | 82.11 | 79.83 |
| German | DP↓ | 4.74 | 5.25 | 3.53 |
| German | EO↓ | 3.57 | 3.49 | 2.62 |
| German | 100-epoch time | | 10.8s | 23.6s |
| Credit | ACC↑ | 80.73 | 77.82 | 78.83 |
| Credit | AUC↑ | 75.18 | 71.59 | 71.82 |
| Credit | F1↑ | 88.38 | 85.15 | 86.04 |
| Credit | DP↓ | 8.86 | 12.25 | 6.09 |
| Credit | EO↓ | 5.25 | 11.49 | 2.36 |
| Credit | 100-epoch time | | 67.8s | 179.8s |
| Recidivism | ACC↑ | 93.37 | 71.38 | 70.72 |
| Recidivism | AUC↑ | 96.18 | 70.28 | 69.71 |
| Recidivism | F1↑ | 90.63 | 58.94 | 61.63 |
| Recidivism | DP↓ | 6.15 | 3.49 | 2.93 |
| Recidivism | EO↓ | 4.53 | 2.48 | 1.94 |
| Recidivism | 100-epoch time | | 48.4s | 64.9s |
From the results, it is seen that our proposed FGD method maintains competitive utility performance while improving fairness metrics. While there is an increase in training time for FGD compared to the vanilla distillation method, the improvement in fairness metrics demonstrates the effectiveness of our approach in addressing fairness without compromising utility.
# 5. Whether the proposed framework can be applied to graph-level tasks, not just node-level ones.
For the graph classification task, to the best of our knowledge, there is no fairness literature. The reason is that the datasets for graph classification are molecular, and thus there are no sensitive attributes. We have added a discussion on extending our method to link prediction in **Appendix H.3**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. I have raised my score to 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 5Zjc,
Allow me to convey our profound thanks for acknowledging our efforts. Your wise counsel and positive recommendations have unquestionably contributed significantly to enhancing our work's quality.
With kind regards,
Authors | Summary: This paper discovers a fairness problem in distilled GNN methods and then introduces a fair graph distillation process to generate fair distilled graph data. To support the algorithm, the authors also introduce a new bias metric, coherence, and propose a bi-level optimization framework, FGD, for fair graph distillation. Theoretical analyses are provided. Experimental results validate FGD's effectiveness in mitigating bias while maintaining model utility across various GNN architectures and datasets.
Strengths: 1. Good presentation. This paper is easy and comfortable to read and follow.
2. Solid theoretical analysis and experimental validation.
3. This paper is the first to address the fairness problem in the distilled-GNN line of work, which gives it good novelty and contribution.
Weaknesses: 1. Some terminology needs a brief description. It would be better to include some preliminary knowledge or terminology descriptions to make the paper more self-contained.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the full name of FGD? The full name needs to be provided when this word first appears.
2. Why is the sensitive attribute S a diagonal matrix rather than a vector with dimension equal to the number of nodes? Is it for computational convenience?
3. Some terminology needs simple description, such as span space and barycenter.
4. Is this methods only applicable to node classification task? Can it be extended to link prediction or graph classification?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 1. What is the full name of FGD? The full needs to be provided when this word first appears.
The acronym FGD stands for Fair Graph Distillation. We will include the full name in Line 54.
# 2. Why the sensitive attribute S is a diagonal matrix, not a vector with the dimension of number of node? Is it for computation convenience?
We will revise the sensitive attribute $\mathbf{S}$ to $\mathbf{s}$ in Lines 56, 66, and 169. It should indeed be a vector with dimension equal to the number of nodes.
# 3. Some terminology needs simple description, such as span space and barycenter.
Thank you for pointing out the need for clarification on specific terminology. We provide the explanation on "span space" and "barycenter" as follows:
- **Span Space**: The span space refers to the geometric space that is spanned by a specific set of vectors. It encompasses all the possible linear combinations of node representations.
- **Barycenter**: The barycenter is referred to as the center of node representations within the same sensitive attribute group. It represents a central point that summarizes the distribution of the data.
We will include these simple descriptions in the revised manuscript.
# 4. Is this methods only applicable to node classification task? Can it be extended to link prediction or graph classification?
Thanks for this great point. Our proposed method primarily focuses on the node classification task, since the coherence loss is specifically designed to measure node-level sensitive attribute information leakage.
For link prediction tasks, our method is not easy to extend. The main reason is that the fairness definition for link prediction (e.g., **dyadic fairness [1]**) is significantly different from that in node classification. We have added a discussion on extending our method to link prediction in **Appendix H.3**. As for graph classification, to the best of our knowledge, there is no fairness literature on this task, because the datasets for graph classification are molecular, and thus there are no sensitive attributes.
[1] Li, Peizhao, et al. "On dyadic fairness: Exploring and mitigating bias in graph connections." International Conference on Learning Representations. 2020. | Summary: This paper focuses on the task of graph distillation (GD) from a fairness perspective. The authors found that the current GD method amplifies bias in GNN training compared to training on original graphs. Since the distilled graphs do not contain sensitive attributes, it is intractable to directly apply previous debiasing methods. To address this issue, the authors first make assumptions on the representation space of the distilled graph. They then propose to measure the bias in the distilled graph representations using the least-squares distance between the distilled representations and the subgroup representations in the original graph. Technically, a variance-based regularization is utilized to penalize the model w.r.t. this measure. Extensive experiments demonstrate that the proposed method can benefit GNN training on distilled graphs with improved fairness.
Strengths: 1. This paper studies a novel topic and has its applicability in real-world scenarios. And it might bring in broader impact and more discussions on the characteristics and caveats of data distillation in other domains.
2. The experiments are thorough.
Weaknesses: 1. The authors' design of the coherence loss in section 3.4 is confusing to me. Figure 2 shows that the intuition is to minimize the distance between z0 and z1 (as depicted by the red arcs in the figure), and this makes sense because fair representations should coincide in the attribute dimension. However, the definition in section 3.4 actually is the variance within group 0 or 1, and reducing this variance would only lead to more compact representations in each group, instead of bringing the two groups closer. Since the main contributions of this paper are based on this design, the authors should clarify this in a more rigorous manner.
2. The empirical results on Pokec-z, German, and Credit all show that the vanilla GD method can achieve comparable utilities to real graphs (within a 5% accuracy gap) but worse fairness. Meanwhile, the results on Pokec-n and Recidivism show significantly worse utilities (up to a 20% or 30% gap in accuracy) but better fairness. The improved fairness on these two datasets calls into question the motivation of this paper, namely the finding that the current GD method worsens GNN fairness. I also wonder whether the authors tuned the vanilla method properly on these two datasets, given such a large utility gap.
3. There is not much logical connection between the theorems in sections 3.2 and the following part of the paper. Even if the span space does not match, or the barycenters are not consistent, it doesn't matter with the proposed method. The authors may need to justify the necessity of these theorems.
4. Some typos. In the introduction, the authors term the bias measurement *consistency* (line 43), while it is *coherence* elsewhere. In line 143, it should be the projection of z' instead of z. The definition of the coherence loss in section 3.4 is missing a superscript 2; without it, the formula is constantly 0.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See comments above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See comments above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # 1. The authors' design of the coherence loss in section 3.4 is confusing
The confusion stems from a misunderstanding of the coherence loss definition in Section 3.4. Here, the function $\mathbf{\pi}^s\left(\boldsymbol{Z}^{\prime}\right)$ represents the probability of **all samples** belonging to the sensitive group $s$. The index $s$ indexes the sensitive attribute prediction vector rather than specifying samples. For the binary sensitive attribute case, the probability predictions must satisfy $\mathbf{\pi}^1\left(\boldsymbol{Z}^{\prime}\right)+\mathbf{\pi}^0\left(\boldsymbol{Z}^{\prime}\right)=1$. Therefore, the term $\operatorname{Coh}^s\left(\boldsymbol{Z}^{\prime}\right)$ is the variance of this probability over **all samples**, not just the variance within group 0 or 1.
We provide a comprehensive statement on how to estimate sensitive attribute and how to integrate such estimation into coherence loss in Sections 3.3 and 3.4, respectively. The intuition behind this design is related to the absence of sensitive attributes in the distilled graphs, which prevents the segregation of synthetic nodes into different sensitive groups.
This forms a key research challenge addressed in our work, and we'll take the reviewer's advice to further clarify this point in the revised manuscript to prevent any confusion. The figure and the related description were meant to simplify the understanding of the concept, and we'll make sure that they are consistent with the more rigorous mathematical description.
# 2. Utility performance gap on Pokec-n and Recidivism dataset.
Thanks for this great point. We have meticulously tuned the vanilla distillation method for these datasets, taking the reduction rate into consideration. For the Credit dataset, we used a reduction rate of 0.08 (meaning that the number of nodes in the distilled graph is 8% of the original graph), as it is the largest applicable rate for that specific dataset, and we originally used this rate for all datasets to control the setting. We report the **updated results on the Pokec-n and Recidivism datasets** as follows.
We conducted new experiments for the Pokec-n and Recidivism datasets, using larger reduction rates of 0.32 and 0.16, respectively:
| Dataset | Metric | Vanilla (GCN) | FGD (GCN) |
|:----------:|:----:|:-------:|:----:|
| Pokec-n | ACC↑ | 62.16 | 61.84 |
| Pokec-n | AUC↑ | 65.82 | 64.59 |
| Pokec-n | F1↑ | 57.38 | 58.62 |
| Pokec-n | DP↓ | 5.83 | **0.84** |
| Pokec-n | EO↓ | 6.34 | **1.94** |
| Recidivism | ACC↑ | 78.18 | 77.51 |
| Recidivism | AUC↑ | 75.83 | 75.84 |
| Recidivism | F1↑ | 63.84 | 61.98 |
| Recidivism | DP↓ | 5.25 | **1.34** |
| Recidivism | EO↓ | 3.49 | **1.98** |
1. **Utility Improvement**: The utility performance of the model improves when trained on a larger distilled graph but exhibits worse fairness performance.
2. **Utility Gap Attribution**: The utility gap observed is attributable to the distillation method itself.
3. **Fairness Issue Persistence**: Even when the utility performance is subpar, the fairness problem still exists and is a concern.
4. **Our Method's Advantage**: Compared to the vanilla distillation method, our Fair Graph Distillation (FGD) enhances fairness performance while maintaining utility performance.
# 3. Logical connection between the theorems and the following part of the paper.
The theorems presented in Section 3.2 demonstrate the relation between the real data and the distilled data, which serves as the rationale for the sensitive attribute estimation. Specifically, **the distilled data and the real data reside in a similar semantic space**, as supported in Section 3.2 and Appendix D. Consequently, the sensitive attribute estimate for distilled data is determined by the projection onto the orthogonal complement of the subspace spanned by real data from each sensitive group. Our theorems support the rationale of the proposed estimation method.
# 4. Some typos.
Thank you for your careful review and for identifying these inconsistencies and typos. We sincerely appreciate your suggestion.
1. **Bias Measurement Term**: We acknowledge the discrepancy between the terms "consistency" and "coherence." We will standardize the terminology throughout the paper.
2. **Line 143 Error**: You are correct, and it should indeed be the projection of $z'$ instead of $z$. This will be corrected.
3. **Coherence Loss Definition**: The variance calculation in section 3.4 is a typo, and we will revise it.
We will ensure that all of these corrections are implemented in the revised version of the manuscript. Thank you once again for your invaluable input.
---
Rebuttal Comment 1.1:
Comment: I have raised my score to 5. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Rethinking Incentives in Recommender Systems: Are Monotone Rewards Always Beneficial? | Accept (poster) | Summary: The development of online media referral platforms has provided a source of income for media content creators, and the platforms' incentive strategies may influence creators' creative direction. Reward-leaning incentive models may also inadvertently encourage creators to over-serve the majority user group, leaving niche groups increasingly underserved. To address these problems, this paper designs a backward incentive mechanism that induces creators' behavior through the game structure, dynamically optimizing the creation model and maximizing social welfare. Its advantages are verified by simulation experiments.
Strengths: To address the problem that reward mechanisms in online content recommendation platforms affect creators' production choices, the platform's content distribution, and social welfare, this paper designs a reverse reward mechanism that guides content creators to optimize their creation strategies and provides locally optimal results for a given welfare metric. This prevents most creators from generating a large amount of homogeneous content that caters to the majority group for the sake of rewards.
1. originality
This paper provides a reverse reward mechanism to address the problem that current reward mechanisms may encourage creators to concentrate on the mass market, thus leaving niche users unserved. The superiority of this mechanism, which is performance-based but discards monotonicity, has been demonstrated in empirical studies, and the approach is highly original.
2. quality
The research problem of this paper is clearly stated, the method is introduced in detail, the logic is clear, and the experiments are reliable. The paper is of good quality.
3. clarity
This paper clearly defines the research problem and demonstrates the effectiveness of the method in detail, both theoretically and through simulation experiments. Overall clarity is good.
4. significance
This paper designs a reverse incentive strategy to counteract the incentive drift caused by profit-oriented incentive strategies on online content recommendation platforms, reducing the undesirable incentives that lead a large number of creators to ignore niche groups and leave them unserved. The research has significant implications.
Weaknesses: The related work section of this paper is weak: there are few references, and only a handful of works from the past five years are cited, which makes it difficult to establish the novelty of the proposed method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: This paper designs an incentive mechanism for creators on online content recommendation platforms; its reverse incentive mechanism establishes a game-theoretic competition among content creators to maximize social welfare. However, this paper has the following problems:
1. the limitations of this paper are not stated;
2. whether the order of the content in Sections 3 and 4 should be switched, as the current order seems somewhat unreasonable;
3. in the experimental part, the data used in this paper and some experimental details are not clearly described. For example, synthetic data are used, but the details of the synthetic data are not shown, and for the results in Figure 1, it is not clear whether the synthetic data or the MovieLens-1M dataset is used;
4. some references are too old and could be replaced with newer ones; research from the last five years could be added where appropriate, and the citation formatting is not standardized;
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This paper seems to have no clear statement of the limitations of the research. I wonder whether the study considers only creators who create for profit, or also those who create out of interest. The current study appears to consider profit-driven creators, but this does not seem to be stated in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Discussion of limitations** One limitation of our work pertains to the focus of our model, which mainly addresses the challenges encountered by those "strong platforms" which dominate content distribution (e.g., Instagram reels & TikTok). On these platforms, creators’ impressions and rewards are almost enforced by the platform’s design, rendering our game structure and modeling assumptions readily justifiable. However, it should be noted that other content platforms, such as Yelp and Amazon, operate differently, wherein creators’ impressions and rewards may originate from other sources (e.g., user searches), so our assumption does not strictly hold. Although we believe the insight revealed by our result still generalizes, substantiating our claims would require significant additional effort. We will clarify the limitations in our revision.
**Q2: Switching Section 3 and 4** We appreciate the reviewer’s suggestion, but after some careful consideration, we still believe the current order is better for presentation since we need to first present the framework of our model in Section 3 before describing the class of merit-based monotone mechanisms in Section 4, where we have to use the notations and concepts introduced earlier in Section 3.
**Q3: Description of dataset** The dataset description for the experiment is provided in Section 6.1, and more details about the dataset and our simulation settings can be found in the supplementary document, Appendix 8.10.
**Q4: References** Thanks for pointing this out. We will for sure address these reference issues in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We hope that our clarifications have satisfactorily addressed the reviewer’s questions. Any further feedback or insights that could increase the reviewer’s evaluation of our work are warmly welcomed. Such insights would be invaluable in improving both the quality and impact of our study. Thank you! | Summary: The authors study strategic content creation in recommendation systems, focusing on the induced game's social welfare. The authors assume that the provider's rewards are entirely determined by the platform's payments, not clicks/engagements. This separation between the ranked results and the creators' incentives facilitates analyzing an expressive game in which the recommender system recommends a list of items (most prior work considers one item) with position bias. Notably, the authors assume that the rewarding mechanism is a mapping from a vector of relevance scores to reward vectors, i.e., $[0,1]^n \rightarrow [0,1]^n$.
The paper considers two classes of merit-based mechanisms, monotone and BRM (and also the BRCM subclass). It showcases evidence against monotone mechanisms, highlighting that in a particular class of games (TvN), the POA of any monotone mechanism is at most $\frac{K}{K+1}$ (plus a small $\frac{1}{n}$ factor), where $K$ is the length of the list. Later, for BRCM mechanisms, the authors show that the welfare function is the potential function; thus, the global optimum of the welfare is a PNE (although the POA could still be $<1$ for some mechanisms).
Finally, the authors describe how to optimize over BRCM mechanisms in the presence of data, and conduct synthetic and semi-synthetic experiments to demonstrate their approach.
Strengths: 1. The paper deals with a timely and important topic, and well-connects to previous literature.
2. The paper non-trivially extends previous literature, suggesting new theory and experimental validation.
3. The optimization problem the paper suggests is exciting and new to this literature.
Weaknesses: 1. Due to the abundance of mathematical objects and notations, the paper is non-trivial to follow. Perhaps this is inevitable, but I see this as a weakness.
2. The case against monotonicity focuses on a relatively small class of TvN games tailored to the authors' argument. Arguing against monotonicity in (more) general games would be much more convincing.
3. The empirical evaluation lacks proper benchmarks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. An assumption of the model is that impressions/clicks of users and creators' rewards are separated and orthogonal. In reality, the utility function of creators is not always "set" by the platform (as argued in 168,170), but rather determined by the interaction the platform allows. For instance, in Medium, blog traffic means more ads, regardless of relevance. Can the platform decide that a blog is irrelevant and thus, despite its heavy traffic, not share the ad revenue with the blog's creator? As written in my summary, I see this separation as the element facilitating a much more expressive game than those considered in prior work. So my question is: Can the authors justify this modeling assumption?
2. Relating to the previous point, something disturbs me in the explanation in Lines 241-250. Say the $K$ content creators attract impressions, clicks, etc., and get high rewards. Then, the $(K+1)$-th creator improves their content's relevance, and the $K$ selected creators suddenly get almost none. I think that this is hard to digest and will probably be deemed unfair from the creators' perspective. Could the authors justify why BRCM mechanisms benefit creators? After all, there are other unfair mechanisms that dictate the content each creator creates, which achieve optimal welfare.
3. The paper advocates using BRCM mechanisms and not monotone ones. Noticeably, the intersection between these classes is not empty. But Theorem 1 suggests that the optimal BRCM mechanism that exists due to Corollary 1 is *not* monotone. This brings the question of how natural this mechanism is (the one guaranteed by Corollary 1). Do we want to sacrifice simplicity (and arguably fairness) for a proportion of $\frac{1}{K+1}$ of the welfare? Since advocating for BRCM is not equivalent to advocating against monotonicity (as the intersection is non-empty), I think the reader could benefit from a more elaborated discussion. Ideally, this paper would have presented evidence beyond social welfare in TvN games, but perhaps this is saved for future work.
4. The welfare function comprises the user welfare, the creators' rewards and costs, and the mechanism's payments. But the mechanism's payments are precisely the creators' rewards, so they cancel. Consequently, by offering unbounded rewards the platform does not change the welfare, but obviously this will never happen in real life. What part of the modeling forbids that? This seems like a loophole in the modeling. Could the authors justify this welfare modeling? Even if we care about the welfare function defined in this paper, one would expect the platform to minimize its payments for any given welfare level. Could this be injected into the model?
5. In the experimental part, the authors compare BRCM mechanisms with monotone mechanisms. However, for the monotone mechanisms, the authors do not optimize over the hyperparameter $\beta$. I believe an apples-to-apples comparison requires optimizing for both classes of mechanisms. Why didn't the authors do that in their analysis? Additionally, what would be other monotone baselines?
6. In Figure 1(a), the optimized BRCM scores better than the optimal BRCM. How could that be possible?
Minor:
• 23: “benefits\cite{}” -> add space
• Lines 139-140 say "Our objective is to design mechanisms M that: 1. guarantee the existence of PNE, thereby ensuring a stable outcome, and 2. maximize social welfare at the PNE." While Line 135 says that this is not the focus of [1,12,13], I think that [1] primarily addresses the same objectives.
• 155: the parameter $n$ seems to be out of context, as it doesn't appear in the $M^3$ notation.
• 210: ".." -> "."
• 254: $BRCM \in BRM$ -> $BRCM \subset BRM$
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Negative result being restrictive** We admit that theoretically demonstrating the limitation of monotonicity in general is challenging, and we identify it as an intriguing future work. In the meantime, our experimental results empirically suggest a constant fraction of welfare loss beyond the class of TvN games: Figure 1 (a, c) indicates that when the user population is well separated, the welfare gap induced by representatives from BRCM and $M^3$ can be approximately 20% (55+ vs. 65+ in (a) and 50+ vs. 60+ in (b)) when $K=5$.
**Q1: Justification of modeling assumption** Our modeling assumption mainly focuses on those "strong platforms" which dominate the content distribution (e.g., Instagram reels & TikTok). On these platforms, the correlation between creators’ impressions and rewards is almost “enforced” by the platform’s design, rendering our game structure and modeling assumptions readily justifiable. However, platforms such as Medium and OnlyFans are not precisely the content recommendation platforms we consider in our model since the creators’ impressions and rewards may originate from other sources (e.g., user searches and subscriptions). We appreciate the reviewer for bringing up this concern, and we will include this point in the discussion of limitations.
We should also clarify that for those "strong platforms", we did not assume that the impression/clicks from users and the rewards from the platform are orthogonal. In Line 102-108, we explain how these two are connected: once the platform decides the expected reward for each creator using a mechanism $M$, it can implement it by setting a traffic allocation scheme $p$ and a post allocation reward $R$. More importantly, the platform does have the flexibility to make the impressions/clicks from users and the rewards from the platform positively correlated. The easiest way to do so is to allocate traffic proportional to the output of $M$ by setting $R=1$. In this case, due to the fairness property of BRM (see Line 151), the platform can always guarantee a creator with a higher relevance score receives a larger portion of the traffic.
**Q2: Why BRM has benefits** The rationale behind the example of BRCM[1,...,1,0,...,0] is that the marginal contribution of the $K$-th creator to the user satisfaction becomes smaller when the $K+1$-th creator gets a similar score, since even if the $K$-th creator lowers her content quality, the satisfaction of this user is not severely impacted, as the $K+1$-th creator can fill in the $K$-th position. Therefore, BRM benefits creators by preventing a rat race among them: everyone's utility would drastically decrease as the competition over a user interest group becomes unnecessarily intense. Such signals can thus help creators to identify and reroute to underserved audiences for larger potential rewards. This bears resemblance to the analogy of traffic congestion, where an individual's utility declines when their chosen route becomes congested, prompting a shift towards an alternative path with less congestion.
If the platform worries that this mechanism might be intricate for creators to digest, it can simply inform creators that the reward from a user impression is proportional to the creator’s marginal contribution to this user’s expected satisfaction, which means the creator will get a lower reward if lots of high-quality but similar content already overloads this user.
**Q3: Simplicity of BRCM** Thanks for pointing out that the intersection between M3 and BRCM is not empty, and we will add corresponding discussions in the revision. However, we believe BRCM does not sacrifice simplicity, as it is pretty straightforward to implement in practice (though more complex to analyze in theory): the platform first estimates the user attention decaying factors $\{r_i\}$ for each user and then, in each user’s interaction session, uses them to calculate the reward for each creator sequentially with complexity $O(K)$.
**Q4: Justification of welfare modeling** In this work, we focused on welfare maximization by disentangling this question from other considerations, such as the platform’s revenue, budgets, and attractiveness to creators compared to other outside options. Such modeling methodology is commonly adopted in mechanism design, often for the clarity of analysis. For instance, in the well-known second-price auction, asking the winner to pay any $\alpha(\leq 1)$ fraction of the second highest bid is also a truthful welfare-maximizing mechanism but will change the seller’s revenue. Similarly, in our model, any rescaling of the reward $M$ (or, more specifically, a rescale of the set of functions $f$) does not change the creators’ equilibrium behavior either. Consequently, if the platform wants to upper bound the total reward, that can be easily done by rescaling the rewards. An alternative way to upper bound the total reward in practice could be to incorporate the total budget constraint in the optimization problem in Section 5.2. The platform may consider adding a soft constraint as a regularization term in the objective function. We will include such discussions in our revision. However, we acknowledge that theoretically achieving both total payment minimization and welfare maximization is challenging, and we believe it is more appropriate to defer it to future investigation.
**Q5: Baselines in experiments** In the experiment, the hyperparameter $\beta=0.05$ for $M^3$ baselines is already the optimized choice. We will clarify this in our revision.
**Q6: Why optimized BRCM is better than the optimal one** We provided an explanation in Line 361-366. Simply put, when the mechanism is fixed, the stochastic nature of creators’ responses might slow down the convergence to the welfare-maximizing PNE. However, during the optimization process, the mechanism undergoes dynamic changes with some randomness, leading to an exploration effect that could potentially improve the outcome.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my concerns. As noted, I believe that this work extends prior works in this research strand in a non-trivial manner. In my opinion, the paper should have further discussions on the points I've raised, assisting researchers unfamiliar with this research strand to understand the modeling assumptions, weaknesses, future challenges, etc. Based on the authors' rebuttal, I believe they are inclined to do so.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for taking the time to respond to our rebuttal. We are pleased to hear that you acknowledge the significance of our contribution in extending the existing research strand, and your feedback holds great value in guiding us towards refining our work and making it more comprehensive. We will carefully revise our work to broaden its impact among researchers who may not be acquainted with this specific research domain. | Summary: This paper studies the incentives in recommendation systems. Specifically, it studies how to design the platform's reward mechanism to steer the creators' competition towards a desirable welfare outcome. Firstly, it shows that a class of mechanisms called "Merit-based Monotone Mechanisms" leads to a constant-fraction loss of welfare. To overcome this loss, it introduces Backward Rewarding Mechanisms (BRMs) and shows that the competition games resulting from BRM induce the strategic creators’ behavior dynamics to optimize any given welfare metric.
Strengths: The paper studies an interesting question in recommendation systems. It shows an interesting negative effect of a wide class of mechanisms on social welfare and then designs another mechanism to overcome this negative result. The theoretical results within the scope of the paper are complete, and there are also empirical experiments.
Weaknesses: The applied value of the model in the paper lacks justification.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you explain the applied value of the model in this paper?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The applied value of the model in the paper lacks justification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The applied value of our model** Nowadays more and more content recommendation platforms (such as YouTube and Facebook) realize that designing proper incentives for creators is crucial for optimizing social welfare and maximizing their total revenue. However, most of these platforms simply employ rule-based heuristic rewarding mechanisms whose induced content creation dynamics are not well understood.
Our model formulates the welfare optimization as a mechanism design problem, so that:
- The long-term effect of different reward mechanisms can be well understood both theoretically and in a simulated environment, thus avoiding a long and expensive feedback loop in online experiments;
- Our theoretical findings narrow the optimization space by revealing a fundamental limitation of a large class of popular rewarding mechanisms (i.e., $M^3$), and we propose an optimization method to search for the optimal mechanism.
Our proposed method has the potential to be directly applied to real-world applications due to the following merits:
- It is easy to implement and straightforward to optimize toward enhancing social welfare (and therefore the revenue of the platforms).
- It is compatible with any probabilistic recommendation strategy and a variety of personalized welfare metrics.
- It has good interpretability, so the platform can easily help creators understand and follow the designed incentives.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the clarification. I keep the original rating.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for the response. We hope that our clarifications have satisfactorily addressed the reviewer’s questions. Any further feedback or insights that could increase the reviewer’s evaluation of our work are warmly welcomed. Such insights would be invaluable in improving both the quality and impact of our study. Thank you! | Summary: This paper considers the game played by content creators in recommendation systems, which they call the content creator competition game. This game is centrally defined by a rewarding function M, decided by the platform, which rewards content creators based on how users engage with their content. The paper focuses on how to design this rewarding function in order to maximize user welfare.
They show first that a practically-motivated class of rewarding functions, “Merit-based monotone mechanisms” (M3), lead to losses in user welfare by producing an equilibrium that caters to majority-group users, and fail to cater to minority-group users. Notably, the “necessary welfare loss” is only slightly suboptimal: such mechanisms can still capture a $K/(K+1)$ factor of the optimum, where $K$ is the parameter defining the top-K recommendation policy (though they do make the point that when $K$ is effectively 1 for users who care only about the top recommendation, this ratio can be 1/2). Also notably, they prove this result in a sub-class of creator competition games with the structure of a majority group and several minority groups, all with orthogonal interests (called TvN games - “trend versus niche”).
Next, they introduce a class of rewarding functions called “backward rewarding mechanisms”, which keeps the merit-based property of M3 but drops the monotonicity assumption, trading it for a set of functions $f_1, \ldots, f_n$ specified by the platform that can be tuned to encourage diversity by making it costly for too many creators to be producing the same kind of content. They show that for TvN games, there exists a backward rewarding mechanism that admits the optimal welfare. They run some simulated experiments with user preferences that constitute a TvN game.
Strengths: - The paper aims to be general: they take almost an axiomatic approach and study an entire class of mechanisms (M3) defined by two main assumptions, which encompass multiple practical mechanisms. They also study an entire class of instances (TvN games).
- They show clean equilibrium results for both classes of mechanisms
- The new class of mechanisms they propose is conceptually interesting — it makes clear why the monotonicity assumption causes problems, and offers a tunable class of algorithms that can help improve user welfare, at least in theory (it remains to be seen whether there is adequate information available in practice to set the parameters of these mechanisms well).
- The paper takes care to make abstract concepts understandable, giving examples and intuition to supplement the math
Weaknesses: 1. I do not understand the “monotonicity” property conceptually (described on Lines 43-44 as “the sum of creators’ utilities increase whenever any creator increases her content relevance”). I may be misunderstanding something here, but I interpret this to mean “when one creator benefits, all creators benefit on average”. This doesn’t seem to necessarily reflect an environment in which content creators are competing: Under competition, it could be the case that when a given creator improves her content, it greatly increases her own utility but decreases all other creators’ utilities more in total?
2. Although the paper makes some attempts to justify why the K/(K+1) loss of M3 mechanisms is bad, this doesn’t seem that bad to me, especially given that it emerges from a purely theoretical model in which many abstractions have been made. I felt that the paper oversold the magnitude of this loss in multiple places. It’s also not clear to me that the proposed fix (BRM) doesn’t have similar loss in other natural sub-classes of games outside TvN games (as their positive result applies only to TvN games). To me, the combination of these two points weakened the motivation for the results sections later in the paper.
Small points about clarity:
- Line 34: what is a “reward signal”?
- Line 41: “and frame a class of prevailing rewarding mechanisms… Merit-based Monotone Mechanism”. It’s not clear what “prevailing” means here.
- In the explanation Lines 50-53, I’m missing a logical step: is “relevance quality” evaluated in terms of the *number* of users who find it relevant? Otherwise, I don’t see how this monotonicity property could cause concentration of creators around majority users’ preferences.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Questions are encompassed in "weaknesses" section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Clarification of monotonicity** The reviewer’s understanding of monotonicity is correct. In many competing environments, a unilateral improvement of one player’s utility could decrease other players’ utilities but does not necessarily decrease the total utility across all players. For example, when all creators compete for a fixed user attention pool, the total utility of creators remains constant or increases slightly (if user attention increases with more content). Monotonicity summarizes this property of such competing environments. The main message of our negative result is that such competing environments are ineffective for social welfare maximization in online content recommendation platforms because they fail to penalize oversaturation in any particular group of users.
**The significance of negative result** First, we need to clarify that a $1/K$ fraction of loss is considered significant in many practical applications, especially commercial platforms, as the span of user attention $K$ is usually not large on many leading content recommendation platforms (e.g., $K\leq 10$). Given that the volume of traffic is large (e.g., billions of video impressions per day on TikTok, https://www.usesignhouse.com/blog/tiktok-stats), any small fraction of welfare increase has the potential to create a considerable amount of revenue for the platform, and our proposed mechanisms pave the way for optimizing this marginal gain. Second, our experimental results suggest that a constant fraction of welfare loss holds beyond the class of TvN games: Figure 1 (a, c) indicates that when the user population is well separated (e.g., generated from distinct Gaussian clusters in our simulation), the welfare gap induced by BRCM and $M^3$ can be approximately 20% (55+ vs. 65+ in (a) and 50+ vs. 60+ in (b)) when $K=5$. Finally, we acknowledge that theoretically generalizing our negative result is an important yet challenging direction, and we consider it an intriguing topic for future work.
**Clarification of other concepts** “Reward signal” refers to the monetary reward set by the platform. By saying “...prevailing mechanisms”, we actually mean “...existing popular mechanisms...”. The “relevance quality” means the average relevance matching quality/score over this user population, not the number of users who find it relevant. We will carefully address these notions in the revision of this paper.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I have read the authors' response, and I thank them for the clarifications. I have no further questions, and I'm satisfied that the weaknesses I raised are not a significant issue, so I'm inclined to keep my leaning-positive score.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We are glad to hear that our explanations addressed the reviewer’s concerns. We are also excited about the potential of our work in enhancing the efficiency of the online content creation and recommendation ecosystem. Hence, we would be delighted to hear any additional guidance that could increase the reviewer’s evaluation of our work. These would help to significantly enhance the quality and visibility of our research. Thank you! | Rebuttal 1:
Rebuttal: We thank all the reviewers for the overall positive and informative feedback. In the following, we respond to the questions one by one. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Compact Neural Volumetric Video Representations with Dynamic Codebooks | Accept (poster) | Summary: A method for compressing volumetric videos is presented in this work. It is based on NeRF with a factored multi-feature-plane representation. The features of the model are compressed in two stages. In the first, a codebook for features is constructed based on the average contribution a feature has to the total reconstruction. Features are weighted based on their score contribution to the NeRF integral. Ones with low score contributions are merged into a zero code, the top 30% of codes are retained, and the rest are clustered using an exponential moving average algorithm. In the second compression stage, a small number of features per temporal segment are selected based on the accumulated gradients of backpropagation on the codebook-compressed model. These features are then optimized and added to the codebook, dynamically growing the codebook with time. The resulting method produces reconstruction quality and compression ratios comparable to SOTA while requiring only hours instead of days to train.
Strengths: A simple method that achieves SOTA accuracy and compression performance but is much more efficient.
Weaknesses: - Writing could be a bit clearer
- Only evaluated on two scenes
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Adding some implementation details where the method is described would help clarify thing sooner. For example, how accumulated score contributions are used to select the three categories, and how a temporal fragment is defined.
- The clustering algorithm in Sec. B of the supplementary could be clearer and should be added to the main text, i.e., is this performed while training the features as in VQ-VAE, or as a post-processing step?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors do not address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestions. We address the major concerns below:
**Q1: Writing could be a bit clearer.**
**A1:** Thanks for your suggestion. We will improve our writing and include more detailed descriptions of our methods in the revised paper.
**Q2: Only evaluated on two scenes.**
**A2:** We conducted experiments on two datasets, totaling 10 scenes (4 from the NHR dataset and 6 from the DyNeRF dataset). We have provided the results for each scene in the supplementary materials.
**Q3: Adding some implementation details where the method is described would help clarify things sooner.**
**A3:** We appreciate your suggestion. In the revised paper, we will include some of the crucial details in the method section, making the paper easier to follow.
**Q4: The clustering algorithm in Sec B or supplementary could be clearer and added to the main text. ie. is this performed while training the features as in VQ-VAE, or is performed as a post-processing?**
**A4:** The clustering algorithm is performed as a post-processing step. We will make this part clearer and add it to the main text in the revised paper. | Summary: The paper addresses the challenge of representing high-fidelity volumetric videos with low storage cost. The authors propose a novel neural representation called the dynamic codebook, which aims to reduce the spatial and temporal redundancy of feature grids inherent to scenes due to self-similarity. It achieves this by merging similar features for model compression and compensating for potential declines in rendering quality through a set of dynamic codes. The experimental results demonstrate that the dynamic codebook method achieves state-of-the-art rendering quality with higher storage efficiency.
The contributions of the paper are twofold:
1. This paper proposes the dynamic codebook representation, which effectively reduces model size by minimizing feature redundancy in space and time while maintaining rendering quality in dynamic detailed regions.
2. Experimental results show that the proposed approach achieves rendering quality comparable to state-of-the-art methods while significantly improving storage efficiency.
Strengths: The paper introduces a novel approach that uses a dynamic codebook to reduce spatial and temporal redundancy.
The paper is well-structured, with clear section headings, and visual aids such as figures are effectively utilized to illustrate key points or enhance comprehension.
Weaknesses: Experimental details and results seem incompletely reported, so I have some confusion about this work. I defer some of my issues in "Questions".
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. This paper claims in Table 1 that “…which is of low model size but extremely slow training speed,” but there is no mention of the training time of the models. It would be helpful to provide insights into the time required to train the model and how it compares to other models.
2. Regarding the comparison of model size, I noticed that this paper uses “post-processing,” including quantization and entropy coding techniques. To my understanding, these techniques can affect the model size; do the other comparison models use them? If not, it would be beneficial to discuss the potential outcomes if these post-processing steps were not applied.
3. In line 237, this paper uses a different k for each dataset. Is there a particular consideration followed when setting the value of k?
In reference to the dynamic codebook's size, it would be useful to explore what constitutes an appropriate size. The codebook-based compression method mentioned can significantly reduce model storage but may result in a decline in rendering quality. Thus, it would be meaningful to investigate how variations in k affect the same dataset.
4. The model size is composed of the quantized dynamic codebook, index planes as well as MLP parameters with entropy encoding, so the bit allocation of different components of the model should be provided, which can help the understanding of the proposed methodology.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have partially addressed the limitations of existing work, though there is space for improvement (see the section Weaknesses and Questions).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestions. We address the major concerns below:
**Q1: It would be helpful to provide insights into the time required for training the model and how it compares to other models.**
**A1:** In Section 4, we mentioned the details of training time: "We train the model on a single NVIDIA A100 GPU, which takes about 1.5 / 4.3 hours for training and 1.0 / 1.7 hours for the construction of a dynamic codebook on NHR / DyNeRF datasets." We also provide a comparison with other models here:
|Method |NV |C-NeRF|D-NeRF|DyNeRF|DyMap|K-Planes|Ours|
|-------------------|---|------|------|------|-----|--------|----|
|Training time (hrs)|>20|>20 |>20 |>100 |16 |2 |2.5 |
**Q2: Analysis of post-processing and allocation of model storage.**
**A2:** To more clearly analyse the model storage in our method, we examined the allocation of model storage size at each step on the NHR dataset, in accordance with the sequence of method implementation. Given that our baseline has fewer MLP parameters, we did not further compress the MLP parameters. The primary storage of the initial model (1) lies in the feature planes. We compressed these using a representation of codebook + index planes (2), which significantly reduces storage. We then utilized quantization (3) and entropy encoding (4) techniques for post-processing, achieving further reductions. The results at this stage correspond to the "compression w/o DC" in Table 3 of our paper. Finally, we employed a dynamic codebook (5) to introduce additional storage for codebook and index planes, enhancing the rendering quality. We will further refine these experiments and analysis and incorporate them into the revised paper.
| |feature planes|codebook|index planes|MLP |Sum |
|---|--------------|--------|------------|----|------|
|1 |119.92 |0 |0 |0.10|120.02|
|2 |0 |29.04 |0.08 |0.10|29.22 |
|3 |0 |7.52 |0.08 |0.10|7.70 |
|4 |0 |5.51 |0.08 |0.10|5.69 |
|5 |0 |15.50 |0.98 |0.10|16.58 |
**Q3: Analysis of hyperparameters such as k (codebook size).**
**A3:** Our method introduces some new hyperparameters such as k. However, there are some principles to follow. Here, we outline some guidelines for setting these hyperparameters:
- **Size of the codebook ('k'):** The setting of the codebook size relates to the trade-off between storage size and rendering quality. For more content-rich scenarios like DyNeRF, a larger codebook is needed. On the other hand, for relatively simpler scenarios like NHR, a smaller codebook suffices.
- **The ratio of codes to discard or retain during codebook compression:** We retain the top 30% of codes with the highest importance score contributions in all datasets. These contribute approximately 80% of the total importance score. Setting the threshold based on this ratio allows us to substantially reduce storage while trying to maintain the original rendering quality.
We designed corresponding ablation studies on NHR, and the results are as follows:
|percent of retained code |10 |20 |30 |
|---------------------------|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.39|32.57|32.85|
|PSNR (w/ dynamic codebook) |33.40|33.46|33.51|
|final model size (MB) |16.3 |16.5 |16.6 |
|k |1024 |2048 |4096 |8192 |16384|
|---------------------------|-----|-----|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.09|32.47|32.85|32.89|33.01|
|PSNR (w/ dynamic codebook) |33.30|33.43|33.51|33.54|33.58|
|final model size (MB) |16.3 |16.4 |16.6 |17.1 |18.0 |
The results show that both the codebook size and the retention ratio influence the storage size and rendering quality of our method. However, the impact is not substantial, indicating that our method is fairly robust to these hyperparameters.
We will design and conduct more comprehensive ablation studies, which will be incorporated into the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. All my concerns were well addressed.
---
Reply to Comment 1.1.1:
Comment: Thank the reviewer for acknowledging our rebuttal. Your feedback is very constructive, and we will revise the paper according to it. | Summary: The proposed method applies the codebook technique to explicit feature plane representation and a dynamic codebook is further proposed for dynamic scenes. Experimental results demonstrate the good performance of the proposed methods.
Strengths: 1. The proposed method significantly reduces the model size while keeping a similar render quality with K-Planes [35] from both qualitative metrics and visualization.
2. The proposed dynamic codebook is easy to follow and the problem addressed is important and interesting.
Weaknesses: 1. Considering that previous work VQRF has introduced codebook to feature grid representation for reducing the model size with an excellent compression ratio, the proposed method here that introduces codebook to feature plane representation is reasonable but not of sufficient novelty.
2. The main contribution of model size reduction comes from the codebook technique. The efficiency of the proposed dynamic codebook is somewhat limited.
3. There is no experimental comparison of the time cost. The influence of the optimization process caused by the dynamic codebook is unclear.
4. Some explanation and evaluation of certain hyperparameters and settings are lacking. For example, in Sec 4, Line 235: "We then retain the top 30% of codes with the highest importance score contributions and cluster the remaining codes into k codes". The reason for the 30% setting and the influence of k could be provided to better demonstrate the efficiency of the proposed strategy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The proposed dynamic codebook should be applicable to other explicit representations, e.g., feature grids. The authors may explain the reason why they did not conduct related experiments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestions. We address the major concerns below:
**Q1: Introducing codebook to feature plane representation is reasonable but not of sufficient novelty.**
**A1:** We would like to emphasize that we have two core contributions, which make our method distinct from previous works:
- **Technical contributions.** We present a carefully-designed method for volumetric video compression. We claim that directly applying compression methods for static scenes to dynamic scenes will result in considerable information loss, because they do not take into account the temporal variability of dynamic scenes, which is fundamentally different from static scenes. To this end, we designed a dynamic codebook compression method tailored to the characteristics of dynamic scenes. Our approach identifies the areas that most require enhancement in each time fragment and incrementally supplements codes into the dynamic codebook.
- **Experimental contributions.** Empirically, we discovered that simply applying the codebook compression methods from static scenes to dynamic scenes results in a noticeable decline in quality in detailed regions, such as facial areas. To overcome this problem, we implemented the proposed dynamic codebook with thoughtful method design and engineering efforts, which achieved a high compression rate on two representative and challenging dynamic scene datasets (NHR and DyNeRF) while ensuring rendering quality comparable to state-of-the-art (SOTA) methods.
We believe that the contributions mentioned above will bring new insights to this field and benefit the community.
**Q2: The effectiveness of the proposed dynamic codebook is somewhat limited.**
**A2:** The dynamic codebook can compensate for the quality loss resulting from compression while requiring less storage. We have provided corresponding qualitative and quantitative analyses in Figure 4 and Table 3. The qualitative results clearly show that compression leads to significant quality loss in detailed areas such as the face, while the dynamic codebook can improve the quality in these areas. From a quantitative perspective, the PSNR improvement brought by the dynamic codebook is not particularly large. We claim there are two reasons for this: 1. The purpose of our method is to compress while maintaining rendering quality, so the rendering quality of the baseline before compression can be considered an upper limit that we can hardly exceed. 2. Detailed areas occupy a smaller proportion of the image, while PSNR is averaged over all pixels when calculated, so the improvement in the quality of detailed areas is not very pronounced in terms of PSNR enhancement.
**Q3: The influence of the optimization process caused by the dynamic codebook is unclear.**
**A3:** In Section 4, we mentioned the details of training time: "We train the model on a single NVIDIA A100 GPU, which takes about 1.5 / 4.3 hours for training and 1.0 / 1.7 hours for the construction of a dynamic codebook on NHR / DyNeRF datasets."
**Q4: Some explanation and evaluation of certain hyperparameters and settings are lacking.**
**A4:** Our method indeed introduces some new hyperparameters. However, there are some principles to follow. Here, we outline some guidelines for setting these hyperparameters:
- **Size of the codebook ('k'):** The setting of the codebook size relates to the trade-off between storage size and rendering quality. For more content-rich scenarios like DyNeRF, a larger codebook is needed. On the other hand, for relatively simpler scenarios like NHR, a smaller codebook suffices.
- **The ratio of codes to discard or retain during codebook compression:** We retain the top 30% of codes with the highest importance score contributions in all datasets. These contribute approximately 80% of the total importance score. Setting the threshold based on this ratio allows us to substantially reduce storage while trying to maintain the original rendering quality.
We designed corresponding ablation studies on NHR, and the results are as follows:
|percent of retained code |10 |20 |30 |
|---------------------------|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.39|32.57|32.85|
|PSNR (w/ dynamic codebook) |33.40|33.46|33.51|
|final model size (MB) |16.3 |16.5 |16.6 |
|k |1024 |2048 |4096 |8192 |16384|
|---------------------------|-----|-----|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.09|32.47|32.85|32.89|33.01|
|PSNR (w/ dynamic codebook) |33.30|33.43|33.51|33.54|33.58|
|final model size (MB) |16.3 |16.4 |16.6 |17.1 |18.0 |
The results show that both the codebook size and the retention ratio influence the storage size and rendering quality of our method. However, the impact is not substantial, indicating that our method is fairly robust to these hyperparameters.
We will design and conduct more comprehensive ablation studies, which will be incorporated into the revised paper.
**Q5: Why not conduct experiments to other explicit representations, e.g., feature grids?**
**A5:** A 4D feature grid would require too much memory to be feasible, and there is currently no such work. At present, there are only two types of dynamic scene reconstruction/view synthesis work based on feature grids: 1. those based on feature planes (2D grids); 2. those using a canonical 3D feature grid combined with a deformation field.
Our work focuses on compression based on the first type of method. The second type is essentially a technique for compressing temporal information with deformation. However, it is not capable of handling some complex dynamic scenes (some complex changes or movements cannot be well modeled using deformation).
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Your responses clarified my main concerns about these 2 hyperparameters of the dynamic codebook. However, I am still concerned that this approach achieves better rendering quality by sacrificing training time, and there is only a marginal quality improvement. The paper claims to solve issues in detailed regions such as facial areas but lacks details. To better discuss this topic, more specifics on the causes, more visual examples, and quantitative analysis showing the improvements are needed.
After a re-evaluation of your paper, I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback. To better demonstrate our improvements in detailed areas such as the facial areas, we conduct an additional ablation study. We segment the facial area of each image on the NHR dataset using SCHP (Self Correction for Human Parsing) and evaluate the PSNR for these areas. The results are as follows:
| |Sport1|Sport2|Sport3|Basketball|Average|
|-|-|-|-|-|-|
|baseline |21.02|22.84|21.59|23.25|22.18|
|compression w/o DC|20.08|21.51|20.89|21.53|21.00|
|compression w/ DC |20.84|22.54|21.50|22.75|21.91|
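For reference, a region-masked PSNR like the one reported in the table above could be computed as follows. This is a minimal sketch assuming images normalized to [0, 1] and a boolean region mask (e.g. from a parsing network such as SCHP); it is not the authors' evaluation code.

```python
import numpy as np

def masked_psnr(pred, gt, mask, max_val=1.0):
    """PSNR restricted to pixels where mask is True.

    pred, gt: float arrays of identical shape (H, W) or (H, W, C).
    mask:     boolean array selecting the region of interest.
    """
    diff = (pred - gt)[mask]          # keep only masked pixels
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```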
The results indicate that direct compression leads to significant degradation in rendering quality, while the dynamic codebook can substantially compensate for this loss. We will conduct more comprehensive experiments and include more qualitative results in the revised paper. | Summary: The authors of this paper propose a dynamic codebook, which optimizes away codes of low importance to rendering the scene, clusters 70% of the least importance codes and optimizes the remaining codes for every time fragment. The process the authors introduce are intuitive and each step furthers compression or works towards reducing the distortion introduced in the compression process.
I have read and acknowledged the authors' rebuttal and will maintain my generally positive assessment.
Strengths: The introduction of a dynamic codebook is intuitive and handles a number of problems in the codebook creation process: first keeping highly important codes, clustering low-importance codes to a zero token, clustering the remaining codes, and then using gradient-aware code optimization to further increase detail for specific time fragments.
The authors demonstrate near state of the art PSNR on NHR and DyNeRF datasets ( < 0.25 dB PSNR difference) with substantially (more than 14x) smaller models.
Weaknesses: As discussed by the authors, the dynamic codebook construction takes at least an hour for NHR and almost two hours for DyNeRF and substantially increases the total end to end time and rendering time.
The dynamic codebook introduces more hyperparameters to tune: what percentage of features to keep, drop, or cluster, and how many dynamic codewords get added. NHR and DyNeRF seem to differ substantially in these, and it's unclear how much of this is based on frame rate, resolution, or complexity.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. In L169, the authors state that 99.9% of the importance is contributed by merely 10% of the voxels in the 2D case and that it is not as bad in the 3D case. Is there a citation or an empirical result the authors could share on this?
2. How large are the codebooks for each of the time fragments? In L237, the authors state that the clustered codes are set to 4096 (NHR) and 16,384 for DyNeRF, but not how many codes are being optimized by the dynamic codebook for each fragment.
3. How much experimentation did it take to discover the different hyperparameters for the dynamic codebook? Is it easy to extrapolate to other datasets? Do much more could be gained by even more optimal hyperparameters on a nerf by nerf basis?
4. Any qualitative or user study on different nerfs? The quality difference seems minor, so it's not clear if it's visually preferred over other methods.
Typo in Table 3:
basline => baseline
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: No concerns
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestions. We address the major concerns below:
**Q1: The dynamic codebook construction increases the total end to end time and rendering time.**
**A1:** Firstly, the dynamic codebook has only a slight influence on the rendering time, as the indexing process of the codebook is nearly instantaneous. Secondly, although the training time increases by 1-2 hours, the overall training time remains comparable to other methods based on feature grids and is at least an order of magnitude faster than methods like DyNeRF. Moreover, the benefits are significant, especially in terms of storage efficiency. This presents substantial application value in scenarios where high storage efficiency is required but training time is not a concern.
We also provide comparison results of training and rendering time with SOTA methods, where rendering speed is tested on the NHR dataset at a resolution of 512×384:
|Method |NV |C-NeRF|D-NeRF|DyNeRF|DyMap|K-Planes|Ours|
|-------------------|---|------|------|------|-----|--------|----|
|Training time (hrs)|>20|>20 |>20 |>100 |16 |2 |2.5 |
|Rendering time (ms)|73 |1969 |2303 |5195 |33 |384 |61 |
**Q2: The dynamic codebook introduces more hyperparameters, and it's unclear how to adjust them.**
**A2:** Our method indeed introduces some new hyperparameters. However, there are some principles to follow. Here, we outline some guidelines for setting these hyperparameters:
- **Size of the codebook ('k'):** The setting of the codebook size relates to the trade-off between storage size and rendering quality. For more content-rich scenarios like DyNeRF, a larger codebook is needed. On the other hand, for relatively simpler scenarios like NHR, a smaller codebook suffices.
- **The ratio of codes to discard or retain during codebook compression:** We retain the top 30% of codes with the highest importance score contributions in all datasets. These contribute approximately 80% of the total importance score. Setting the threshold based on this ratio allows us to substantially reduce storage while trying to maintain the original rendering quality.
We designed corresponding ablation studies on NHR, and the results are as follows:
|percent of retained code |10 |20 |30 |
|---------------------------|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.39|32.57|32.85|
|PSNR (w/ dynamic codebook) |33.40|33.46|33.51|
|final model size (MB) |16.3 |16.5 |16.6 |
|k |1024 |2048 |4096 |8192 |16384|
|---------------------------|-----|-----|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.09|32.47|32.85|32.89|33.01|
|PSNR (w/ dynamic codebook) |33.30|33.43|33.51|33.54|33.58|
|final model size (MB) |16.3 |16.4 |16.6 |17.1 |18.0 |
The results show that both the codebook size and the retention ratio influence the storage size and rendering quality of our method. However, the impact is not substantial, indicating that our method is fairly robust to these hyperparameters.
We will design and conduct more comprehensive ablation studies, which will be incorporated into the revised paper.
**Q3: The basis of the distribution of importance score.**
**A3:** The statement "when representing 3D scenes using feature grids, 99.9% of the importance is contributed by merely 10% of the voxels" originates from reference [16], which focused on compressing static scenes. In our paper, we claim that "the situation is not as extreme when we use 2D planes to represent volumetric videos" primarily for two reasons: 1. Volumetric videos have much richer content compared to static scenes. 2. The representation using feature planes is somewhat more compact than using a 3D feature grid. Hence, when we use feature planes to represent volumetric videos, the situation isn't as extreme as when using feature grids for static scenes. Moreover, our empirical findings support this claim--we discovered that 99.9% of the importance is contributed by approximately 75% of the voxels and 80% of the importance is contributed by approximately 30% of the voxels.
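The cumulative statistics cited in this answer (e.g. 80% of the importance contributed by roughly 30% of the voxels) can be reproduced from per-voxel importance scores with a few lines. This is a sketch under the assumption that per-voxel scores are available as a flat array; it is not the authors' code.

```python
import numpy as np

def voxel_fraction_for_importance(scores, target=0.8):
    """Fraction of voxels whose top-ranked importance scores
    account for at least `target` of the total importance."""
    s = np.sort(scores)[::-1]          # descending importance
    cum = np.cumsum(s) / s.sum()       # cumulative importance fraction
    return (np.searchsorted(cum, target) + 1) / len(s)
```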
**Q4: How many codes are being optimized by the dynamic codebook for each fragment?**
**A4:** In section 4, we mentioned that "we optimize 1000 appearance codes and 5000 density codes for each time fragment."
**Q5: Any qualitative or user study on different nerfs? The quality difference seems minor, so it's not clear if it's visually preferred over other methods.**
**A5:** We provide qualitative comparisons with other methods in Figure 2 and Figure 3. The quality difference between our approach and the state-of-the-art methods is indeed minor. However, this does not conflict with our claimed contribution: we achieve rendering quality comparable to state-of-the-art methods, but with a significant boost in storage efficiency. The improvement in storage efficiency can be clearly seen from the quantitative results in Table 1 and Table 2.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the detailed responses to my questions and concerns as this provides better clarity of your work. I'm still happy with my recommendation of Accept and believe this is a sufficiently strong technical work for NeurIPS.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. We will revise the paper according to your suggestions. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a novel approach for representing volumetric video using a dynamic codebook that incorporates the temporal correlation of features. This addresses the drawback of existing feature grid-based methods, which overlook this correlation.
Strengths: 1. The proposed method uses a multidimensional feature space and a dynamic codebook to model the changing scene.
2. The paper presents model compression techniques, including pruning and weight clustering, which reduce the size of the model.
Weaknesses: 1. The article's novelty is limited as the introduction of the dynamic codebook approach appears to be primarily focused on engineering aspects. Additionally, the results from ablation experiments suggest that this approach is not particularly effective, and although a large number of model parameters are introduced, there is only a marginal improvement in video quality.
2. It is impractical to manually adjust dynamic codebook parameters (such as k) for different videos in real-world applications.
3. The paper lacks comparison results regarding training and rendering time.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What specific improvements have been made in Sections 3.1 and 3.2 compared to existing works, like [16]?
2. Why were certain methods not tested on the DyNeRF dataset?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Further research is needed to develop effective methodologies for dynamic codebooks. Although the paper enhances the expressive capabilities of the dynamic neural representation model, there is room for future investigation into optimizing training and rendering speed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful suggestions. We address the major concerns below:
**Q1: Limited novelty and no apparent improvements compared to existing works.**
**A1:** We would like to emphasize that we have two core contributions, which make our method distinct from previous works:
- **Technical contributions.** We present a carefully-designed method for volumetric video compression. We build our method on feature planes instead of a 3D feature grid, which is a more suitable representation for volumetric videos. We claim that directly applying compression methods for static scenes to dynamic scenes will result in considerable information loss, because they do not take into account the temporal variability of dynamic scenes, which is fundamentally different from static scenes. To this end, we designed a dynamic codebook compression method tailored to the characteristics of dynamic scenes. Our approach identifies the areas that most require enhancement in each time fragment and incrementally supplements codes into the dynamic codebook.
- **Experimental contributions.** Empirically, we discovered that simply applying the codebook compression methods from static scenes to dynamic scenes results in a noticeable decline in quality in detailed regions, such as facial areas. To overcome this problem, we implemented the proposed dynamic codebook with thoughtful method design and engineering efforts, which achieved a high compression rate on two representative and challenging dynamic scene datasets (NHR and DyNeRF) while ensuring rendering quality comparable to state-of-the-art (SOTA) methods.
We believe that the contributions mentioned above will bring new insights to this field and benefit the community.
**Q2: The results from ablation experiments suggest that this approach is not particularly effective.**
**A2:** The dynamic codebook can compensate for the quality loss resulting from compression while requiring less storage. We have provided corresponding qualitative and quantitative analyses in Figure 4 and Table 3. The qualitative results clearly show that compression leads to significant quality loss in detailed areas such as the facial regions, while the dynamic codebook can improve the quality in these areas. From a quantitative perspective, the PSNR improvement brought by the dynamic codebook is not particularly large. We claim there are two reasons for this: 1. The purpose of our method is to compress while maintaining rendering quality, so the rendering quality of the baseline before compression can be considered an upper limit that we can hardly exceed. 2. Detailed areas occupy a small proportion of the image, while PSNR is averaged over all pixels, so the improvement in detailed areas is not very pronounced in terms of PSNR enhancement.
**Q3: It is impractical to manually adjust the hyperparameters in real-world applications.**
**A3:** Our method indeed introduces some new hyperparameters; however, there are clear principles to follow. We outline guidelines for setting them below:
- **Size of the codebook ('k'):** The setting of the codebook size relates to the trade-off between storage size and rendering quality. For more content-rich scenarios like DyNeRF, a larger codebook is needed. On the other hand, for relatively simpler scenarios like NHR, a smaller codebook suffices.
- **The ratio of codes to discard or retain during codebook compression:** We retain the top 30% of codes with the highest importance score contributions in all datasets. These contribute approximately 80% of the total importance score. Setting the threshold based on this ratio allows us to substantially reduce storage while trying to maintain the original rendering quality.
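The retention rule above can be sketched as a simple cumulative-importance selection. This is an illustrative reconstruction, not the authors' implementation: the scores here are synthetic (a long-tailed Pareto draw stands in for accumulated per-code importance), so the exact share will differ from the ~80% reported for the real datasets.

```python
import numpy as np

# Hypothetical per-code importance scores; learned codebooks typically
# exhibit a long-tailed distribution, which a Pareto draw imitates.
rng = np.random.default_rng(1)
scores = rng.pareto(1.5, size=4096)

order = np.argsort(scores)[::-1]          # codes sorted by importance, descending
keep = order[: int(0.30 * len(scores))]   # retain the top 30%

share = scores[keep].sum() / scores.sum()
print(f"top 30% of codes carry {share:.0%} of the total importance")
```

With a heavy-tailed score distribution, a small fraction of codes carries most of the importance mass, which is what makes a fixed retention ratio a reasonable storage/quality trade-off.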
We designed corresponding ablation studies on NHR, and the results are as follows:
|percent of retained code |10 |20 |30 |
|---------------------------|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.39|32.57|32.85|
|PSNR (w/ dynamic codebook) |33.40|33.46|33.51|
|final model size (MB) |16.3 |16.5 |16.6 |

|k |1024 |2048 |4096 |8192 |16384|
|---------------------------|-----|-----|-----|-----|-----|
|PSNR (w/o dynamic codebook)|32.09|32.47|32.85|32.89|33.01|
|PSNR (w/ dynamic codebook) |33.30|33.43|33.51|33.54|33.58|
|final model size (MB) |16.3 |16.4 |16.6 |17.1 |18.0 |
The results show that both the codebook size and the retention ratio influence the storage size and rendering quality of our method. However, the impact is not substantial, indicating that our method is fairly robust to these hyperparameters.
We will design and conduct more comprehensive ablation studies, which will be incorporated into the revised paper.
**Q4: The paper lacks comparison results regarding training and rendering time.**
**A4:** In Section 4, we mentioned the training-time details: "We train the model on a single NVIDIA A100 GPU, which takes about 1.5 / 4.3 hours for training and 1.0 / 1.7 hours for the construction of a dynamic codebook on NHR / DyNeRF datasets." The rendering time depends on factors such as the complexity of the scene and the resolution of the image. We tested the rendering speed of our method and other methods on the NHR dataset at a resolution of 512×384. The full comparison results are as follows:
|Method |NV |C-NeRF|D-NeRF|DyNeRF|DyMap|K-Planes|Ours|
|-------------------|---|------|------|------|-----|--------|----|
|Training time (hrs)|>20|>20 |>20 |>100 |16 |2 |2.5 |
|Rendering time (ms)|73 |1969 |2303 |5195 |33 |384 |61 |
**Q5: Why were certain methods not tested on the DyNeRF dataset?**
**A5:** Based on the results reported in DyNeRF, the performance of Neural Volumes is quite poor. C-NeRF, as a follow-up to Neural Volumes, is also expected to perform poorly, and they both require substantial storage. As for DyMAP, we ran it ourselves and found that it does not work on the DyNeRF dataset.
---
Rebuttal Comment 1.1:
Title: response
Comment: Thank you for the author's response. The author has provided an implicit compression method with a dynamic codebook. However, judging from the implementation results, the method may not be convincing:
The baseline model used for comparison (such as DyNeRF) is uncompressed, and previous compression works (such as VQRF) have already provided efficient codebook representations and high compression ratios.
Therefore, even without introducing a dynamic codebook, it is possible to achieve better performance than the baseline model. This can be inferred from the results of the ablation studies (Table 3, compression w/o DC).
Furthermore, the ablation studies are confusing. Using DC slightly increases PSNR but significantly increases model size. This result does not demonstrate the superiority of the compression technique, even though it still reduces model size compared to the baseline.
Moreover, in terms of technical details, manually setting hyperparameters (k) is impractical and infeasible in real-world applications. The authors' response in the rebuttal did not provide a clear explanation for this issue.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response. We would like to clarify the validity and practicality of our method as follows:
1. **Comparison with baseline models**.
- **DyNeRF**. The model of DyNeRF is an MLP, which is challenging to further compress. The primary issue of DyNeRF is its slow training and rendering speed.
- **K-Planes**. We have compressed the important baseline, K-Planes, for comparison, which is referred to as `compression w/o DC` in our ablation studies.
2. **Performance of VQRF**. Using VQRF for dynamic scene compression results in quality loss in detailed areas, with a PSNR decrease of about 1 point (please refer to the table below for more details).
3. **Effectiveness of DC**. DC can improve the PSNR by approximately 0.91 in detailed areas. As mentioned in the paper, DC primarily enhances the rendering quality in detailed areas such as facial areas. Since these areas have a small pixel count proportionally, they don't significantly impact the overall PSNR. However, they greatly affect the rendering quality, as shown in Figure 4 of the paper. To better illustrate this, we conducted a quantitative ablation study on the facial areas. Specifically, we segment the facial area of each image on the NHR dataset using SCHP (Self Correction for Human Parsing) and evaluate the PSNR for these areas. The results are as follows:
| |Sport1|Sport2|Sport3|Basketball|Average|
|-|-|-|-|-|-|
|baseline |21.02|22.84|21.59|23.25|22.18|
|compression w/o DC|20.08|21.51|20.89|21.53|21.00|
|compression w/ DC |20.84|22.54|21.50|22.75|21.91|
4. **Hyperparameters**. Through our ablation study on hyperparameters, we demonstrated that the impact of hyperparameters on our method is low. Furthermore, as mentioned in our rebuttal, there are clear principles for setting the hyperparameters. Therefore, we believe our method is practical for real-world applications. We will further revise the principles and the ablation analysis and incorporate them into the paper. | null | null | null | null | null | null |
Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs | Accept (poster) | Summary: The authors identify two major issues of text-to-motion generation as overemphasis on action names and the coarseness of sentence-level representations.
To this end, hierarchical semantic graphs are adopted to factorize coarse sentences into fine-grained action concepts, thus refining the generated motion from coarse-to-fine.
Furthermore, the hierarchical design allows flexible control of the generation procedure.
Extensive experiments show the efficacy of the proposed method.
Strengths: The proposed three tiers of node as motions, actions and specifics are interesting.
The hierarchical diffusion is well designed in disentangling different levels of granularity. Also, the experimental results show reasonable improvements with additional levels.
The performance is impressive.
The motion refinement application is attractive and promising.
The analysis on distribution of diffusion steps might be interesting in identifying the performance bottleneck among different semantic levels.
Extensive ablation experiments provide a nice view of the effectiveness of different components.
Weaknesses: Qualitative comparison of the different hierarchies is missing. It would be helpful to visualize motion generated from the different levels of latent embeddings, since humans are fundamentally the only gold standard for evaluating motion generation, given the rather close numeric metrics across levels.
The ablation on the design of the semantic graph is missing.
A more detailed description of the motion refining procedure, like a figure, an algorithm, or some equations might be added for better clarity.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Is it possible to generate with Action level or Specific level only? It would be interesting to see the corresponding results, given there is a trend that the more diffusion steps for Specific level brings better performance.
The hierarchical nature of the method might be suitable for long sequence generation. Comparison on this would be an interesting thing to do.
For motion refinement, is it possible to modify the graph beyond changing edge weights, i.e., adding/deleting nodes? Or tuning some nodes' weight to zero. What would happen?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The author described some limitations of the methods, while it would be more helpful to discuss whether the current top-down paradigm could be extended in a bottom-up manner.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful comments, and for noting that our method is "interesting". We address the questions as below.
**Q1**: Is it possible to generate with action level or specific level only?
**A1**: As suggested, we train networks to generate only the action level and only the specific level. As shown in the following table, the hierarchical diffusion model performs better on the FID metric with the same number of diffusion steps. We believe this is because the hierarchical generation method can produce more detail than the single-stage generation method.
| Methods | Total steps | R-Precision Top 1/2/3 | FID | MM-Dist | Diversity |
| ------------------- | :---------: | :-----------------------: | :-------: | :-------: | :-------: |
| GraphMotion | 50 | 0.496/0.686/**0.778** | **0.118** | 3.143 | **9.792** |
| Action level only | 50 | **0.507**/**0.692**/0.763 | 0.190 | 3.133 | 9.634 |
| Specific level only | 50 | 0.492/0.686/0.772 | 0.144 | **3.090** | 9.710 |
**Q2**: The hierarchical nature of the method might be suitable for long sequence generation.
**A2**: Thanks for your insightful advice. We select motions with sequence frame lengths greater than 150 in the HumanML3D test set to evaluate the long-sequence generation ability of the models. As shown in the following table, our method performs better on the FID metric than the baseline methods when generating long sequences. The results show that our method is well suited to long-sequence generation.
| Methods | R-Precision Top 1/2/3 | FID | MM-Dist | Diversity | MModality |
| ------------------ | :---------------------------: | :-------: | :-------: | :-------: | :-------: |
| MDM (ICLR 2023) | 0.295/0.467/0.572 | 0.594 | 5.497 | 8.994 | **2.874** |
| MLD (CVPR 2023) | 0.420/0.607/0.709 | 0.824 | 3.532 | **9.429** | 2.700 |
| GraphMotion (Ours) | **0.442**/**0.644**/**0.744** | **0.260** | **3.411** | 9.167 | 2.702 |
**Q3**: For motion refinement, is it possible to modify the graph beyond changing edge weights, i.e., adding/deleting nodes? Or tuning some nodes' weight to zero. What would happen?
**A3**: Yes, we also perform the following operations on the hierarchical semantic graphs: **(1)** masking the node by replacing it with the MASK token; **(2)** modifying the node; **(3)** deleting nodes; **(4)** adding a new node. Please see Figure III in our global response for the details. The qualitative results demonstrate that our approach provides a novel method of refining generated motions.
**Q4**: Qualitative comparison of different hierarchies is missed.
**A4**: Thanks for your valuable suggestion. We provide different levels of qualitative comparison in the global response (Figure II). The results show that the output at the higher level (e.g., specific level) has more action details. Specifically, the motion level generates only coarse-grained overall motion. The action level generates local actions better than the motion level but lacks action specifics. The specific level generates more action specifics than the action level.
**Q5**: The ablation on the design of the semantic graph is missed.
**A5**: Thanks for your advice. We also tried another semantic graph based on the Stanford Scene Graph Parser. Unlike the proposed GraphMotion, which extracts three types of nodes (motions, actions, and specifics), the Stanford Scene Graph Parser extracts triplet relationships (e.g., "woman"->"in"->"room"). We construct the semantic graph using the overall sentence as the first semantic level, the relation (e.g., "in") as the second semantic level, and the entities (e.g., "woman" and "room") as the third semantic level.
As shown in the following table, the method based on the Stanford Scene Graph Parser is not suitable for motion generation. We believe this is because the triplet relation emphasizes inter-entity relationships rather than actions, whereas the proposed GraphMotion builds the semantic graph with actions at its core, making it more suitable for motion generation.
| Methods | Total steps | R-Precision Top 1/2/3 | FID | MM-Dist | Diversity |
| ---------------------------------------- | :---------: | :---------------------------: | :-------: | :-------: | :-------: |
| Stanford Scene Graph Parser based method | 150 | 0.497/0.683/0.781 | 0.212 | 3.105 | **9.789** |
| GraphMotion | 150 | **0.504**/**0.699**/**0.785** | **0.116** | **3.070** | 9.692 |
**Q6**: A more detailed description of the motion refining procedure, like a figure, an algorithm, or some equations might be added for better clarity.
**A6**: Thanks for your suggestion. We have added detailed descriptions of the motion refining procedure in the revision. Specifically, for modifying an edge, we modify the attention coefficient $e$ as $e'=\gamma e+\delta$, where $\gamma$ and $\delta$ are adjustable parameters. For modifying a node, we first extract a text embedding through CLIP, and then either replace the original node with this embedding or add it to the graph as a new node. The graph network then updates all nodes. Finally, we use the updated nodes to generate new motions.
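The affine edge modification $e'=\gamma e+\delta$ can be illustrated with a tiny sketch. This is not the authors' code: the coefficient values are made up, and the renormalization step is our own assumption (softmax-normalized attention coefficients would otherwise no longer sum to one after rescaling).

```python
import numpy as np

def modify_edge(e, gamma, delta):
    # Affine rescaling of a single attention coefficient: e' = gamma * e + delta.
    return gamma * e + delta

# Hypothetical softmax-normalized attention coefficients of one node's
# incoming edges in the semantic graph.
e = np.array([0.5, 0.3, 0.2])

# Strengthen the first edge, then renormalize so the coefficients
# again sum to 1 (our assumption, not stated in the rebuttal).
e_mod = e.copy()
e_mod[0] = modify_edge(e_mod[0], gamma=2.0, delta=0.1)
e_mod /= e_mod.sum()
print(e_mod)  # first edge's relative weight increases, others shrink
```

Increasing $\gamma$ or $\delta$ on one edge shifts relative attention toward that action or specific, which is how edge editing steers the refined motion.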
**Q7**: It would be more helpful to discuss whether the current top-down paradigm could be extended in a bottom-up manner.
**A7**: It is an interesting topic. A possible idea is the retrieval enhancement method, which generates motions based on action details retrieved from graph nodes.
We sincerely thank you for your helpful comments. We will add the above important discussions in the final manuscript and highlight them. Thanks again for spending a huge amount of time on our paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer
Would it be possible for us to kindly ascertain if the provided responses have satisfactorily tackled any concerns you may have had and if further explanations or clarifications are needed? Your generous investment of time and effort in the evaluation of our work is truly commendable. We extend our heartfelt gratitude for your insightful commentary and the considerable time you have devoted to reviewing our paper.
---
Rebuttal Comment 1.2:
Comment: I appreciate the responses given by the authors. The newly conducted experiments have addressed my concerns. I would like to keep my positive rating.
---
Reply to Comment 1.2.1:
Title: Sincere appreciation
Comment: We sincerely thank you for your prompt and insightful review of our paper. Your comment is immensely appreciated and undoubtedly helps improve the quality of our work. We will add the above important discussions in the final manuscript and highlight them. Thanks again for taking the time and effort when handling our paper. | Summary: This work decomposes the motion description into three levels including motion, action and specifics, and proposes hierarchical semantic graphs to achieve fine-grained control of motion generation.
Experiments with the proposed method on the HumanML3D and KIT datasets demonstrate better motion generation and greater sensitivity to subtle differences in texts than existing techniques.
Strengths: The motivation is clear, and the proposed solutions directly address the raised issues.
The results seem to be impressive and the experimental setup is somewhat reasonable.
The ability to continuously refine the generated motion is meaningful and helpful to the community.
Weaknesses: 1. The paper states several times that Transformer may overemphasize action names, but I think that the action level in the hierarchical semantic graph is also another kind of emphasis. Therefore, if possible, I hope the authors can experimentally demonstrate the "overemphasis" of the Transformer and prove that the hierarchical semantic graph does not have this problem.
2. Why is the last one in the action level of Figure 1 resumes instead of walking? How does it handle "squats to picks" and distinguish "walks and walking"? Also, the hierarchical semantic graph is built on a valid semantic parsing tool, and I'm not sure whether the tool will greatly affect the overall robustness of the model.
3. What are the results in the first row of Tables 3 and 4 in what configuration? By my understanding, these two rows should be the complete ablation of their respective parts, so they should be numerically different.
4. Is the modification in Figure 5 implemented in other latest works as well? Does it only modify the weights of the edges and can it modify the nodes?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the above weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors fully explain the social impact of the work in the appendix, but only elaborate on the shortcomings of the used models (e.g., diffusion model and VAE) in terms of limitations, and do not well explain their own limitations and failure cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for providing constructive feedback, and for noting that "the ability to continuously refine the generated motion is meaningful and helpful to the community." We address the questions below.
**Q1**: Experimentally demonstrating the "overemphasis" of the Transformer and proving that the hierarchical semantic graph does not have this problem.
**A1**: To demonstrate the "overemphasis" of the Transformer and prove that the hierarchical semantic graph does not have this problem, we mask the verbs and action names in the motion description to force the model to generate motion only from action specifics. For example, given the motion description "a person walks several steps forward in a straight line.", we would mask "walks".
Transformer extracts text features automatically and implicitly. However, **it may encourage the model to take shortcuts, such as overemphasizing the action name "walks" at the expense of other important properties**. Therefore, when the verbs and action names are masked, the other models, which directly use the transformer to extract text features, fail to generate motion well.
By contrast, the hierarchical semantic graph **explicitly extracts the action specifics**. The explicit factorization of the language embedding space facilitates a comprehensive understanding of motion description. It allows the model to **infer from action specifics such as "several steps forward" and "in a straight line" that the overall motion is "walking forward"**.
As shown in the following table, our method can synthesize motion by relying only on action specifics, while other methods, which directly use the transformer, fail to generate motion well. These results indicate that the hierarchical semantic graph avoids the imbalance problem.
| Methods | FID | MM-Dist | Diversity | MModality |
| ------------------ | :-------: | :-------: | :-------: | :-------: |
| MDM (ICLR 2023) | 5.622 | 7.163 | 8.713 | 3.578 |
| MLD (CVPR 2023) | 3.492 | 5.632 | 8.874 | 3.596 |
| GraphMotion (Ours) | **1.826** | **5.530** | **9.284** | **3.699** |
Besides, we also provide a qualitative comparison in the global response (Figure I). When the verbs and action names are masked, the existing models, which directly use the transformer to extract text features, tend to generate motion randomly. In contrast, the generated results of our method better match the descriptions.
**Q2**: Why is the last one in the action level of Figure 1 resumes instead of walking?
**A2**: The last part of the original video is walking. But for ease of visualization, we sample 8 frames from the video and merge them into one figure. The walking part is not sampled and therefore fails to be shown. We are sorry for any confusion caused by this figure. In the revision, we have sampled more frames from the video to eliminate this confusion.
**Q3**: How does it handle "squats to picks" and distinguish "walks and walking"?
**A3**: In "squats to picks", both "squats" and "picks" are considered verbs and are therefore extracted as local actions. The time relationship between "squats" and "picks" is provided by the overall sentence (i.e., the global motion node). For "walks and walking", "walks" is considered a verb. For the semantic role parsing toolkit we use, "walking" might be considered action specific. For example, for "resumes walking", "walking" is taken as the action specific of the local action "resumes".
**Q4**: The hierarchical semantic graph is built on a valid semantic parsing tool, and I'm not sure whether the tool will greatly affect the overall robustness of the model.
**A4**: We explain the robustness of our method from two aspects. **First**, referring to the experiment in **A1**, our method can synthesize motion even after masking all verbs, which indicates its robustness. **Second**, semantic parsing tools are well developed and can handle the simple texts used in motion generation. Therefore, the robustness of our method is not a particular concern.
**Q5**: What are the results in the first row of Tables 3 and 4 in what configuration?
**A5**: Both the first row of Tables 3 and 4 represent the result of motion generation based on the vanilla Transformer and single-stage diffusion models. For comparison purposes, we present this baseline result in both Tables 3 and 4. We have added detailed descriptions in the revision to guide the readers.
**Q6**: Is the modification in Figure 5 implemented in other latest works as well?
**A6**: No, no other method can achieve this modification. To the best of our knowledge, we are the first to propose hierarchical semantic graphs, a fine-grained control signal, for text-to-motion generation. Besides, we are the first to propose modifying the edge weights of hierarchical semantic graphs to refine the generated results.
**Q7**: Does it only modify the weights of the edges and can it modify the nodes?
**A7**: It can modify the nodes. We provide additional qualitative analysis of refining motion results in the global response (Figure III). Specifically, we perform the following operations on the hierarchical semantic graphs: **(1)** masking the node by replacing it with the MASK token; **(2)** modifying the node; **(3)** deleting nodes; **(4)** adding a new node. The qualitative results demonstrate that our approach provides a novel method of refining generated motions.
We sincerely thank you for your insightful comments. We will add the above important discussions in the final manuscript and highlight them. Thanks again for your great effort in improving our paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer
Could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Your dedication to evaluating our work is deeply valued, and we are sincerely grateful for your perceptive comments and the substantial time you have dedicated to reviewing our paper.
---
Rebuttal Comment 1.2:
Title: Some concerns that have not been addressed
Comment: Thank you for taking the time to comment. I am satisfied with some of the answers provided. However, there are still some concerns that have not been addressed: (1) Why are the first rows of Table 3 and Table 4 numerically the same? I think that they are ablation experiments done from different two aspects, so their baselines should not be the same. (2) Regarding limitations, the supplementary material only elaborates on the shortcomings of the used models (e.g., diffusion model and VAE) in terms of limitations, and does not well explain their own limitations and failure cases.
---
Reply to Comment 1.2.1:
Title: Further explanations and clarifications (1/2)
Comment: We sincerely thank you for your careful evaluation of our paper and for further discussion with us. We will provide further explanations and clarifications on the following questions.
**Q1**: Why are the first rows of Table 3 and Table 4 numerically the same?
**A1**: We apologize for any confusion caused by Tables 3 and 4. For further clarification, we provide the description of each row in Tables 3 and 4 in the following two tables. In fact, both the first row of Tables 3 and 4 represent the result of motion generation based on the vanilla Transformer and the single-stage diffusion model. We reiterate this result in both Tables 3 and 4 for the following reasons:
* In Table 3, the first row acts as a baseline to measure the effectiveness of different components.
* In Table 4, the first row is used to compare results at different levels to demonstrate that the intermediate result of our model is also better than the baseline. Especially because the structure of the motion level is very similar to the baseline, comparing the baseline results with the motion level results can further demonstrate the importance of semantic graph reasoning.
We extend our sincere apologies for the oversight. To prevent any potential misunderstandings, we have incorporated comprehensive descriptions in the revised version. Furthermore, should you recommend the removal of the first row in Table 4, we are also open to its deletion to enhance the readability of the paper.
| Rows in Table 3 | Description |
| :----------------------------------------------------------- | :----------------------------------------------------------- |
| **Row 1 (Baseline)** | **Vanilla Transformer & Single-stage diffusion model** |
| Row 2 (+ Semantic Graph) | All levels of the Hierarchical Semantic Graph & Single-stage diffusion model |
| Row 3 (+ Semantic Graph & Graph Reasoning) | All levels of the Hierarchical Semantic Graph & Graph Reasoning & Single-stage diffusion model |
| Row 4 (+ Semantic Graph & Coarse-to-Fine Diffusion) | All levels of the Hierarchical Semantic Graph & Three-stage diffusion model |
| Row 5 (+ Semantic Graph & Graph Reasoning & Coarse-to-Fine Diffusion) | All levels of the Hierarchical Semantic Graph & Graph Reasoning & Three-stage diffusion model (i.e., **the proposed GraphMotion**) |
| Rows in Table 4 | Description |
| :--------------------------------------------------- | :----------------------------------------------------------- |
| **Row 1 (Baseline)** | **Vanilla Transformer & Single-stage diffusion model** |
| Row 2 (Motion level) | The first level of the Hierarchical Semantic Graph (with Graph Reasoning) & Single-stage diffusion model |
| Row 3 (Motion level & Action level) | The first two levels of the Hierarchical Semantic Graph (with Graph Reasoning) & Two-stage diffusion model |
| Row 4 (Motion level & Action level & Specific level) | All levels of the Hierarchical Semantic Graph (with Graph Reasoning) & Three-stage diffusion model (i.e., **the default setting for the proposed GraphMotion**) |
---
Reply to Comment 1.2.2:
Title: Further explanations and clarifications (2/2)
Comment: **Q2**: The supplementary material only elaborates on the shortcomings of the used models (e.g., diffusion model and VAE) in terms of limitations, and does not well explain their own limitations and failure cases.
**A2**: Thanks for your valuable advice. We have conducted a more extensive analysis of the limitations of our method, as follows:
* Our method can generate results of arbitrary length, but still bounded by the maximum length in the dataset. Modeling continuous human motion with temporal consistency presents an intriguing direction.
* The current pipeline is limited to a singular form of motion representation. A more versatile pipeline that can seamlessly adapt to multiple datasets simultaneously would offer greater flexibility across diverse scenarios.
* Since our method performs a diffusion process on the motion latent space rather than on the raw motion sequences, it is more suitable for high-level motion editing, such as style transfer, than for low-level motion editing, such as modifying the position of only one joint. Exploring low-level motion editing within latent space holds great promise and poses an exciting avenue for future research.
* Our method inherits the randomness of diffusion models. While this characteristic contributes to the enhancement of diversity, it is important to acknowledge that it can occasionally lead to outcomes that are less desirable.
* The human motion synthesis capabilities of our method are limited by the performance of the pre-trained motion variational autoencoders. Furthermore, delving into the realm of a more efficient motion latent space holds significant promise as a compelling avenue for future research.
* Though our method adds negligible computational overhead, it is still limited by the slow inference speed of existing diffusion models. However, with the development of diffusion models, we anticipate a progressive mitigation of this limitation.
We have added the above important discussions in the final manuscript. In addition, we will include some failure cases in the revision. There are two main reasons for these failure cases:
* The randomness of diffusion models occasionally leads to outcomes that are less desirable, particularly noticeable in instances where longer sequences are generated, exacerbating the impact of randomness.
* Due to its inadequate grasp of low-frequency words, the model struggles to generate motion that accurately aligns with the provided description containing many low-frequency words.
We sincerely thank you for your valuable feedback. We will add the above important discussions in the final manuscript and highlight them. The dedication of time and effort you have devoted to a comprehensive review of our paper is genuinely appreciated. | Summary: This paper proposes a coarse-to-fine diffusion model coupled with a hierarchical semantic graphs to address the text-to-motion generation problem. To preserve the fine-grained control signals from captions, three-level textual features are extracted through GAT. Then, three diffusion models are adopted to recover the latent motion representations, which is decoded into motion sequences via VAE.
Strengths: 1. Semantic graphs are introduced to model the coarse-to-fine textual descriptions.
2. Hierarchical diffusion model is technically sound to learn the latent motion distributions.
3. The experimental results outperforms sota methods.
4. Fine-grained controllability of the proposed model is interesting.
Weaknesses: 1. The motivation behind the imbalance problem needs to be more convincing. In lines 31-33, the deficiencies of existing methods boil down to imbalance and coarseness. I am curious whether there are examples that could prove the imbalance problem exists in other methods.
2. Many descriptions in this paper are misleading or incorrect and need further explanation. In line 176, the term ``codebook size'' is ambiguous because it is generally associated with VQ-VAE. In lines 180-181, the purpose of the diffusion process should be learning a mapping from Gaussian noise sampled from $\mathcal{N}(0,1)$ to the motion latent representation $z^m$, with the global motion node $V^m$ as the condition. In line 183, $\beta_t$ is not the step size; it is the noise schedule.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Q1: Please present more examples to prove that imbalance problem exists in the other methods and how your method addresses it.
Q2: According to Tables D and E in your supplementary material, the R-Precision of your proposed VAE models outperforms that of real motion; does this mean that R-Precision is not suitable for evaluating these two motion datasets?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have discussed the potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for taking the time and effort when reading our paper and providing constructive comments. We address the questions below.
 
**Q1**: Please present more examples to prove that the imbalance problem exists in the other methods and how your method addresses it.
**A1**: We demonstrate the imbalance problem of other methods from quantitative and qualitative perspectives, respectively, and show how our method addresses it.
* From the quantitative perspective, we mask the verbs and action names in the motion description to force the model to generate motion only from action specifics. For example, given the motion description "a person walks several steps forward in a straight line.", we would mask "walks".
The transformer extracts text features automatically and implicitly. However, **it may encourage the model to take shortcuts, such as overemphasizing the action name "walks" at the expense of other important properties**. Therefore, when the verbs and action names are masked, the other models, which directly use the transformer to extract text features, fail to generate motion well.
By contrast, our method explicitly extracts the action specifics. The explicit factorization of the language embedding space discourages the model from taking shortcuts. When the verbs and action names are masked, our method **can infer from action specifics such as "several steps forward" and "in a straight line" that the overall motion is "walking forward"**.
As shown in the following table, our method can synthesize motion by relying only on action specifics, while other methods fail to generate motion well. These results indicate that our method avoids the imbalance problem of other methods.
| Methods | FID | MM-Dist | Diversity | MModality |
| ------------------ | :-------: | :-------: | :-------: | :-------: |
| MDM (ICLR 2023) | 5.622 | 7.163 | 8.713 | 3.578 |
| MLD (CVPR 2023) | 3.492 | 5.632 | 8.874 | 3.596 |
| GraphMotion (Ours) | **1.826** | **5.530** | **9.284** | **3.699** |
* From the qualitative perspective, we provide qualitative comparison of the imbalance problem in the global response (Figure I). As shown in Figure I in the attached PDF, when the verbs and action names are masked, existing models tend to generate motion randomly. In contrast, our method can generate motion that matches the description based solely on the action specifics. These results show that our method is not overly focused on the verbs and action names.
 
**Q2**: According to Table D and E in your supplementary material. R-Precision of your proposed VAE models outperforms Real motion, does this mean that R-Precision is not suitable for evaluating these two motion datasets?
**A2**: Since the existing methods are already very close to real motion in the R-Precision, we also believe that the R-Precision is not a good metric to evaluate these two motion datasets. For the motion generation task, it may be necessary to integrate a variety of metrics to evaluate the model performance.
 
**Q3**: Many descriptions in this paper are misleading or incorrect, and it needs further explanation.
**A3**: Thank you for your helpful comments. We have corrected these descriptions in the revision. Specifically, we have made the following corrections:
* We have amended "codebook size" to "token size" in line 176.
* We have rewritten lines 180-181 to "our goal is to learn the diffusion process from Gaussian noise sampled from $\mathcal{N}(0,1)$ to motion latent representation $z^m$, conditioned on the global motion node $\mathcal{V}^{m}$."
* We have explained ${\beta}_t$ in line 183 by "${\beta}_t$ is the noise schedule."
Besides, we have proofread our paper carefully in revision.
 
We sincerely thank you for your valuable comments. We will add the above important discussions in the final manuscript and highlight them. If you have further questions, please feel free to contact us. Thanks again for taking the time and effort on our paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer
Could we kindly inquire if the responses have satisfactorily tackled your concerns, or if there is a need for further clarification? Your commitment to reviewing our work is immensely appreciated, and we express our sincere gratitude for your insightful comments and the considerable time you have dedicated to reviewing our paper. | Summary: This paper presents a novel motion generation pipeline that utilizes a 3-level hierarchical semantic graph. The entire reverse process of the motion diffusion model is divided into three stages: overall motion, local actions, and action specifics. The semantic graph is extracted through semantic role parsing and further enhanced using a Graph Attention Network. The obtained node features are subsequently input into a single transformer based on their semantic level.
Strengths: 1. The proposed GraphMotion approach demonstrates excellent performance in terms of numerical metrics on two datasets, showing significant improvements compared to existing methods. Particularly noteworthy is the detailed comparison of GraphMotion at different diffusion steps with other state-of-the-art methods, which adds more persuasive evidence to the results.
2. The motion refinement aspect is intriguing, as users are able to achieve a certain degree of motion editing by modifying the content of nodes. This adds an interesting and interactive element to the method.
3. The motivation and objectives of the entire approach are well-explained, making it easy to understand the rationale behind the proposed method.
Weaknesses: 1. It is recommended that the authors conduct user studies to quantitatively compare their method with other existing works based on qualitative results. The current demo video lacks sufficient comparisons, and additional comparisons should be included to provide a comprehensive evaluation. Furthermore, it would be beneficial to showcase the qualitative results of motion refinement in the demo video to provide a more complete demonstration of the method's capabilities.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why is the video length of MotionDiffuse shorter than other methods in the demo video?
2. Why is there a significant decrease in the FID metric on KIT-ML when the number of steps increases from 50 to 150?
3. Can GraphMotion perform low-level motion editing or motion composition similar to MotionDiffuse and MDM?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations have been well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive comments, and for noting that our method is "novel", "intriguing" and "well-explained". We address the questions as below.
**Q1**: Why is the video length of MotionDiffuse shorter than other methods in the demo video?
**A1**: This is because we follow the settings provided by the MotionDiffuse repository, which recommends setting the generated motion sequence length to 60 frames. We follow this setting when visualizing, which makes the generated video shorter.
**Q2**: Why is there a significant decrease in the FID metric on KIT-ML when the number of steps increases from 50 to 150?
**A2**: We explain this phenomenon from two perspectives:
* Due to the relatively small scale of the KIT-ML dataset, motion sequences can be generated well when the number of diffusion steps is 50.
* Since we use DDIM instead of DDPM, increasing the number of diffusion steps does not introduce additional randomness and may reduce the diversity of the generated results. We find that when the number of DDIM diffusion steps increases, even though the fluency of the generated motions increases, their diversity decreases, leading to a degradation in the FID metric.
Therefore, to balance generation quality and generation diversity, setting the number of steps to 50 on the KIT-ML dataset is a better choice than setting the number of steps to 150.
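To make the determinism point concrete, here is a minimal, hedged sketch (my illustration, not the paper's implementation; `abar_t` stands for the cumulative schedule product $\bar{\alpha}_t$). A DDIM update with $\eta = 0$ injects no fresh noise, so repeating it with the same inputs always yields the same output:

```python
import numpy as np

def ddim_step(x_t, eps_pred, abar_t, abar_prev):
    """One deterministic DDIM update (eta = 0).

    x_t: current noisy sample; eps_pred: the network's noise prediction;
    abar_t, abar_prev: cumulative noise-schedule products at the current
    and previous timesteps. No fresh noise is injected, so running more
    steps cannot introduce additional randomness.
    """
    # predict the clean sample implied by the noise estimate
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps_pred) / np.sqrt(abar_t)
    # move deterministically toward the previous timestep
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps_pred
```

Calling `ddim_step` twice with identical inputs returns identical outputs, which is the property the answer above relies on.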
**Q3**: Can GraphMotion perform low-level motion editing or motion composition similar to MotionDiffuse and MDM?
**A3**: GraphMotion can perform in-betweening motion editing. Unlike MotionDiffuse and MDM, which implement motion composition during the reverse diffusion process, GraphMotion can use the motion decoder $\mathcal{D}$ to fill new motion frames from the motion latent space and fixed motion frames.
In addition, since GraphMotion performs a diffusion process on the motion latent space rather than on the raw motion sequences, GraphMotion is more suitable for high-level motion editing, such as style transfer. Specifically, the input motion is first encoded into the motion latent space by the motion encoder $\mathcal{E}$. Then we can use the energy-guided diffusion process [1] to make overall motion editing according to the text.
[1] Yu, Jiwen, et al. "Freedom: Training-free energy-guided conditional diffusion model." *arXiv preprint arXiv:2303.09833* (2023).
**Q4**: It is recommended that the authors conduct user studies for quantitative comparison.
**A4**: Thanks for your valuable suggestion. We randomly selected 39 motion descriptions for the user study. The results are shown in the following table. Each row represents the preference rate of GraphMotion over the compared model. GraphMotion is preferred over the other models most of the time.
| **Methods** | Preference Rate of GraphMotion|
| ----------------------------- | :-------------: |
| GraphMotion vs. MotionDiffuse | 64.10% |
| GraphMotion vs. MLD | 56.41% |
| GraphMotion vs. Ground Truth | 48.72% |
**Q5**: It would be beneficial to showcase the qualitative results of motion refinement in the demo video to provide a more complete demonstration of the method's capabilities.
**A5**: Thanks for your advice. We will showcase the qualitative results of motion refinement in the demo video, such as modifying the edge. In addition, we will also showcase the qualitative results of new operations in the global response (Figure III), i.e., **(1)** masking the node by replacing it with the MASK token; **(2)** modifying the node; **(3)** deleting nodes; **(4)** adding a new node.
We sincerely thank you for your constructive comments. We will add the above important discussions in the final manuscript and highlight them. Thanks again for taking the time and effort on our paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer
May we kindly inquire whether the provided responses have adequately addressed your questions, or whether further explanations or clarifications are required? We wish to express our sincere gratitude for your meticulous evaluation and for generously investing a significant amount of your time in reviewing our paper. Your feedback would be greatly valued.
---
Rebuttal 2:
Title: Post-rebuttal Comment
Comment: The rebuttal addressed most of my concerns and I will keep my positive view on the paper.
---
Rebuttal Comment 2.1:
Comment: Thank you for your invaluable feedback. Your expertise and attention to detail have been instrumental in guiding us toward a clearer presentation. The dedication of time and effort you have devoted to a comprehensive review of our paper is genuinely appreciated.
Title: Sincere appreciation | Rebuttal 1:
Rebuttal: # Global Response
We sincerely thank all PCs, SACs, ACs, and Reviewers for their time and efforts when handling our paper.
All reviewers appreciate the contributions of our method:
* Both Reviewers DRsM and Xx1r point out that "**the motivation is clear**, and **the proposed solutions address the raised issues**."
* Reviewers DRsM, Xx1r, and E76p all comment that "**the experimental setup is somewhat reasonable**" and "**the detailed comparison** of GraphMotion at different diffusion steps with other state-of-the-art methods, which adds **more persuasive evidence to the results**."
* Reviewers DRsM, xtyr, Xx1r, and E76p all mention that "the proposed GraphMotion approach demonstrates **excellent performance** in terms of numerical metrics on two datasets, showing **significant improvements** compared to existing methods."
* Reviewers DRsM, xtyr, Xx1r, and E76p all point out that the motion refinement aspect is "**intriguing**", and "**meaningful and helpful to the community**".
As suggested by the reviewers, we provide the following in the attached PDF:
* **Figure I: Qualitative comparison between our method and other methods on the imbalance problem.** To demonstrate the imbalance problem of other methods and prove that our method does not have this problem, we mask the verbs and action names in the motion description to force the model to generate motion only from action specifics. When the verbs and action names are masked, existing models tend to generate motion randomly. In contrast, our method can generate motion that matches the description based solely on the action specifics. These results show that our method is not overly focused on the verbs and action names.
* **Figure II: Qualitative comparison of different hierarchies.** The results show that the output at the higher level (e.g., specific level) has more action details. Specifically, the motion level generates only coarse-grained overall motion. The action level generates local actions better than the motion level but lacks action specifics. The specific level generates more action specifics than the action level.
* **Figure III: Additional qualitative analysis of refining motion results.** We perform the following operations on the hierarchical semantic graphs: **(1)** masking the node by replacing it with the MASK token; **(2)** modifying the node; **(3)** deleting nodes; **(4)** adding a new node. The qualitative results demonstrate that our approach provides a novel method of refining generated motions, which may be meaningful and helpful to the community.
We will add the above important qualitative analysis in the final manuscript and highlight them. Thanks again for taking the time and effort on our paper.
Pdf: /pdf/6a7e439974ccc86679c945f161842db6d9c5cf50.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Robust Model Reasoning and Fitting via Dual Sparsity Pursuit | Accept (spotlight) | Summary: This paper is about model fitting. The authors consider a scenario in which it is unknown whether correspondences between points in 2 images stem from 3D points that are (1) generally distributed, (2) lie on a plane, or (3) lie on a plane while the motion between the 2 images is not projective but affine. The latter implies that an affine camera, rather than a pinhole camera, was used for projection. This can hold if, for instance, the distance between camera and scene is large compared to the depth variation within the scene.
Case (1) amounts to the well-known fundamental matrix constraint x’^T*F*x=0 with (x, x’) being corresponding 2D homogeneous points. Here, F is a 3x3 matrix with rank(F)=2. For noise-free data, matrix F can be estimated from at least 8 point correspondences since F is determined only up to scale and each point-point correspondence yields a single equation.
Case (2) amounts to a 3x3 homography H. Unlike the F-matrix, homographies have full rank in general. Like fundamental matrices, they are only determined up to scale. Since each point-point correspondence yields 2 equations, H can be determined from at least 4 correspondences.
For a general affine transformation between the two images, case (3) implies that the last row of H equals [0,0,1]. There are no constraints on the remaining 6 entries, in general.
The authors show that by expressing cases (2) and (3) as fitting multiple subspaces, all three cases can be expressed by the same model. They propose a projected coordinate-descent type of algorithm to estimate it.
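As a concrete companion to case (1), here is a minimal sketch of the classical 8-point DLT estimate of F (my hedged illustration, not code from the paper; Hartley normalization and noise handling are omitted):

```python
import numpy as np

def eight_point(x, xp):
    """Plain DLT estimate of F from >= 8 correspondences x <-> xp
    (3xN homogeneous coordinates). Hartley normalization omitted."""
    N = x.shape[1]
    # each match gives one equation xp_i^T F x_i = 0,
    # i.e. kron(x_i, xp_i)^T vec(F) = 0 with column-major vec
    A = np.stack([np.kron(x[:, i], xp[:, i]) for i in range(N)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3, order='F')
    # enforce rank(F) = 2 by zeroing the smallest singular value
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt

# quick self-check on exact synthetic data
rng = np.random.default_rng(1)
U, s, Vt = np.linalg.svd(rng.standard_normal((3, 3)))
F_true = U @ np.diag([s[0], s[1], 0.0]) @ Vt   # a rank-2 "ground truth"
x = rng.standard_normal((3, 12))
# any xp on the epipolar line F_true @ x satisfies the constraint
xp = np.stack([np.cross(F_true @ x[:, i], rng.standard_normal(3))
               for i in range(12)], axis=1)
F_hat = eight_point(x, xp)
residuals = np.array([xp[:, i] @ F_hat @ x[:, i] for i in range(12)])
```

On noise-free data the recovered matrix matches the ground truth up to scale, which is the uniqueness property the summary alludes to.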
Strengths: To me, this interpretation of F/H-matrix geometry is new. I cannot say whether this naturally arises from classical theorems. The fact that all papers I am aware of fit either an F-matrix or a homography can be taken as evidence that at least this fact is not generally known.
Weaknesses: - proposition 2 is wrong
Matrix Psi has rank 2, which can be seen from the fact that the third row is linearly dependent on the first two. Hence, G_H=(9,7,2,3).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please correct proposition 2.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Q: About the embedding of homography matrix and its geometric relationship.
**R:** We would like to thank the reviewer for the positive feedback.
It seems that the reviewer's concern is about Proposition 1. As for Proposition 1, we agree that the homography constraint yields two independent equations, since the last one is a linear combination of the first two. But this happens for the DLT solution, which converts the problem to fitting a hyperplane $\theta = vec(\mathbf{H})$. Under our subspace recovery framework, it converts to estimating the subspace $\Psi(\mathbf{H})$, which has full rank 3; thus the third basis indeed exists, and we have $G_{\mathbf{H}} = (9,6,3,3)$. This is also verified and applied in related papers [A][B] (such as Eq. (2) or Tab. 1 in [A]).
_[A] Robust Homography Estimation via Dual Principal Component Pursuit. CVPR 2020._
_[B] Efficient Deterministic Search With Robust Loss Functions for Geometric Model Fitting. TPAMI 2022._
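This rank distinction can be checked numerically. The sketch below is my illustration (not code from [A] or the paper), using the standard Kronecker embedding: noise-free homography inliers $x \otimes x'$ span a 6-dimensional subspace of $\mathbb{R}^9$, whose 3-dimensional orthogonal complement is spanned by $vec([e_k]_\times \mathbf{H})$, even though each single match contributes only 2 independent DLT equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(v):
    # cross-product matrix [v]_x, i.e. skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

H = rng.standard_normal((3, 3))      # a generic (invertible) homography
x = rng.standard_normal((3, 30))     # homogeneous source points
xp = H @ x                           # exact correspondences x' ~ H x

# embed each match as m_i = x_i kron x'_i, so m_i^T vec(A) = x'_i^T A x_i
# (column-major vec); inliers then satisfy m_i^T vec([e_k]_x H) = 0, k=1,2,3
M = np.stack([np.kron(x[:, i], xp[:, i]) for i in range(x.shape[1])])
B = np.stack([(skew(np.eye(3)[k]) @ H).flatten(order='F') for k in range(3)])

rank_M = np.linalg.matrix_rank(M)    # inliers span a 6-dim subspace of R^9
rank_B = np.linalg.matrix_rank(B)    # the complement basis has full rank 3
residual = np.max(np.abs(M @ B.T))   # each embedding is orthogonal to all 3 bases

# yet a single match contributes only 2 independent DLT equations in vec(H):
A0 = np.stack([np.kron(x[:, 0], skew(np.eye(3)[k]).T @ xp[:, 0])
               for k in range(3)])
rank_per_point = np.linalg.matrix_rank(A0)
```

The per-point rank 2 reflects the DLT view, while the rank-3 complement reflects the subspace view, matching the rebuttal's $G_{\mathbf{H}} = (9,6,3,3)$.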
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their explanation. I do not have any further questions. | Summary: The paper studies the geometric model fitting problem with unknown model type and heavy outliers. It proposes a unified optimization model with dual sparsity constraints that combines outlier rejection, true model reasoning, and parameter estimation. Moreover, a fast numerical algorithm is proposed to solve the approximate and dimension-reduced model via separability of the objective function. Numerical experiments on synthetic and real data sets are conducted to compare the performance of the proposed method with other related works. Overall, the paper is complete, from theory and algorithm to experiments, with a section on broader impact.
Strengths: 1. The proposed dual-sparsity optimization model has a certain novelty.
2. A variety of numerical results are presented to justify the effectiveness of the proposed method.
Weaknesses: 1. The existence and uniqueness of the proposed model are not discussed in detail.
2. The description of formulation for the proposed model is not fully concise and precise, especially the treatment of the rank-term.
3. Model sensitivity and robustness could be further discussed from the theoretical aspect.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Even with Proposition 3, model (13) is not equivalent to model (14), which is later solved inexactly. Why was the convex relaxation applied to (14) rather than (13)? Would that cause a loss of accuracy, especially of the low-rankness?
2. How about the uniqueness of the solutions for the proposed model?
3. Convergence analysis for the proposed algorithm could be provided.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The proposed work aims to avoid parameter selection by introducing the parameters as variables in the unified model. However, this leads to the introduction of other regularization parameters associated with the sparsity terms in the objective function, which again may need fine-tuning. Some practical guidance for diverse data sets and application settings should be highlighted.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Q1: About problem (13), (14), and the rank-term?
**R1:** For better understanding, we should introduce problem (13) first, which is actually modeled for finding all $r$ independent sparse bases, assuming $r$ is known. But in our unknown-model fitting task, $r$ is not known in advance and thus needs to be estimated simultaneously. For this purpose, we introduced Proposition 3, which converts the estimation of $r$ into $r = \max rank(\mathbf{X})$. We then integrated this into our general formulation, obtaining problem (14). This conversion is natural and necessary. Both the later convex relaxation and the solution are based on problem (14), because (14) is our final general formulation, while (13) is merely an intermediate formulation.
# Q2. How about the uniqueness of the solutions for the proposed model?
**R2:** We will discuss the global optimality of our proposed DSP, which implies the existence and uniqueness of the solution.
Our DSP formulation actually follows the minimization problem on the Grassmannian $\mathbb{G}(r,D)$. An element of $\mathbb{G}(r,D)$ can be represented by an orthonormal matrix (multiple bases) in $\mathbb{O}(r,D) := \{ X \in \mathbb{R}^{D\times r}: X^T X = I_r\}$, which is the well-known Stiefel manifold. With this understanding, we obtain the following parameterized problem:
$\min\limits_{X \in \mathbb{O}(r,D)} f(X)$,
where $f:\mathbb{R}^{D\times r}\rightarrow \mathbb{R}$
is locally Lipschitz, possibly non-convex and non-smooth. In our DSP formulation, we have
$f(X) = \frac{1}{2}\|M^T X - E\|_F^2 + \lambda \|X\|_1 + \gamma \|E\|_{2,1}$.
Since we consider problems on the Grassmannian, we use tools from Riemannian geometry to state optimality conditions. Specifically, we generalize the definition of the Clarke subdifferential and denote $\tilde{\partial}f$ the Riemannian subdifferential [D] of $f$:
$\tilde{\partial}f(X)= (I-XX^T)\nabla f(X)$.
We say that $X$ is a critical point of our DSP problem if and only if $0\in\tilde{\partial}f(X)$, which is a necessary condition for being a global optimum. Note that $f$ is non-smooth but convex, and the Clarke subdifferential is a nonempty convex set, since a locally Lipschitz function is differentiable almost everywhere. Thus we can conclude that a global optimum $X^*$ exists.
Actually, in the community of robust subspace recovery, DPCP and its follow-ups [A]-[C] have provided theoretical guarantees for the modeling, the properties of global optima, and convergence. These results also provide the basic theory for our DSP formulation, directly giving theoretical guidance for our own global optimality analysis.
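For intuition, one projected Riemannian subgradient step with a QR retraction can be sketched as follows (a hedged toy, not the authors' implementation: the term in $E$ is dropped and a subgradient of the $\ell_1$ term is taken as the elementwise sign):

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, lam = 9, 3, 0.1
M = rng.standard_normal((D, 50))   # illustrative embedded data matrix

def euclid_subgrad(X):
    # a subgradient of f(X) = 0.5*||M^T X||_F^2 + lam*||X||_1 (E omitted)
    return M @ (M.T @ X) + lam * np.sign(X)

def riemannian_subgrad(X):
    # tangent-space projection (I - X X^T) G, matching the subdifferential above
    G = euclid_subgrad(X)
    return G - X @ (X.T @ G)

def step(X, mu):
    # descend, then retract back onto O(r, D) via a QR factorization
    Q, _ = np.linalg.qr(X - mu * riemannian_subgrad(X))
    return Q

X0, _ = np.linalg.qr(rng.standard_normal((D, r)))   # orthonormal initialization
X1 = step(X0, 1e-3)
```

The projected subgradient is orthogonal to the current columns, and the retraction keeps every iterate on the Stiefel manifold, which is what the critical-point condition $0\in(I-XX^T)\nabla f(X)$ presupposes.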
# Q3: Convergence analysis for the proposed algorithm could be provided.
**R3:** Intuitively, because our objective is convex and we use a gradient-descent-like optimization method, it converges to the optimum under proper initialization. Specifically, given a constant step size, suppose that the function $f$ satisfies the $(\alpha,\epsilon,X^*)$-Riemannian Regularity Condition. Let $\\{X_k\\}$ be generated by our DSP with step size $\mu_k \equiv \mu \leq \alpha\epsilon/\xi^2$ and initial $X_0$ satisfying $dist(X_0, X^*)\leq \epsilon$, where $\epsilon$ is the neighborhood radius, $\xi$ denotes an upper bound on the size of the Riemannian subgradients in that neighborhood of $X^*$, and $\alpha>0$ denotes a scale factor.
Then, for all $k>0$, we have
$dist(X_k,X^*)\leq \max\\{ dist(X_0,X^*) - \mu \alpha k/2,\ \mu\xi^2/\alpha \\}$,
which implies that after at most $K=2(dist(X_0,X^*)-\mu\xi^2/ \alpha)/(\mu\alpha)$ iterations, the inequality $dist(X_k, X^*)\leq \mu\xi^2/ \alpha$ holds for all $k\geq K$, i.e., the iterates converge within $K$ steps. A larger $\mu$ leads to a faster decrease but a larger universal upper bound of $\mu\xi^2/ \alpha$. Papers [A]-[D] provide more details of the convergence analysis for such problems.
# Q4: About Parameter settings.
**R4:** The hyperparameters of our DSP do not need fine-tuning for each scene, since our original purpose is to explore a unified method for all unknown models under different scenes. These parameters have clear physical significance, which provides guidance for setting them.
* In detail, $\gamma$ is used to constrain outliers: an error entry $\mathbf{e}$ is generated if the two vectors are not orthogonal to some extent. In other words, a correct match should satisfy
$|\cos(\theta(\mathbf{m}_i, \mathbf{x}))| \leq \gamma$,
where $\theta(\mathbf{m}_i, \mathbf{x})$ denotes the angle between the two vectors. For correct matches without noise, we have $\mathbf{m}_i^T\mathbf{x} = 0$, i.e., they are orthogonal.
In the noisy case, we set $\gamma = 0.05$, which allows an inlier to have $\theta(\mathbf{m}_i, \mathbf{x}) \in [87.13^{\circ}, 92.87^{\circ}]$.
We also follow the idea of simulated annealing: during optimization, we decrease $\gamma \leftarrow 0.98\gamma$ every 20 iterations, with $\gamma_{min} = 0.02$.
* As for the parameter $\lambda$, it balances the fitting error against model complexity. Philip H. S. Torr studied this in the paper ''Geometric motion segmentation and model selection'', concluding that, under the assumption of independence among matches, the optimal estimate is contributed only by the 4 noisy coordinate values of each match $(u_i, v_i, u_i', v_i')$; thus the parameter term and the error term differ by a scale factor of $\log(4N)$.
* The threshold $\tau$ is used to control the estimation of the basis number $r$. Fortunately, the optimal value of $\tau$ can be derived from the statistical analysis shown in Fig. 3.
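The settings above can be illustrated with a small sketch (my hedged illustration; it only checks the angular slack implied by the cosine thresholds and the annealing schedule):

```python
import math

# inlier test: |cos(angle(m_i, x))| <= gamma
gamma, gamma_min = 0.05, 0.02

# the angular slack implied by a cosine threshold t is [acos(t), 180 - acos(t)]
theta_hi = math.degrees(math.acos(gamma))      # ~87.13 deg for gamma = 0.05
theta_lo = math.degrees(math.acos(gamma_min))  # ~88.85 deg for gamma_min = 0.02

# simulated-annealing-style schedule: gamma <- 0.98 * gamma every 20 iterations,
# floored at gamma_min
gammas = [gamma]
for it in range(1, 201):
    if it % 20 == 0:
        gamma = max(0.98 * gamma, gamma_min)
    gammas.append(gamma)
```

After 200 iterations the threshold has decayed ten times but has not yet reached the floor, so the admitted angular band tightens gradually rather than abruptly.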
_[A] Dual Principal Component Pursuit:Improved Analysis and Efficient Algorithms, NeurIPS 2018._
_[B] A Linearly Convergent Method for Non-Smooth Non-Convex Optimization on the Grassmannian with Applications to Robust Subspace and Dictionary Learning, NeurIPS 2019._
_[C] Dual Principal Component Pursuit for Robust Subspace Learning:Theory and Algorithms for a Holistic Approach, ICML 2021._
_[D] Subgradient Descent Learns Orthogonal Dictionaries, ICLR 2019._
---
Rebuttal 2:
Comment: Dear Reviewer WgUj,
Thank you for being a reviewer for NeurIPS2023, your service is invaluable to the community!
The authors have already submitted their feedback and I noticed that you don't appear to have submitted a new round of comments.
Could you examine rebuttals and other reviewers' comments, and open up discussions with the authors and other reviewers?
Regards, Your AC
---
Rebuttal 3:
Comment: All my comments have been addressed and no further questions pop up at this point, so I will raise the score. | Summary: This paper considers the robust model fitting problem in the presence of outliers, which is a fundamental problem in low-level CV. The aim is to simultaneously achieve outlier rejection, model selection, and model parameter estimation in a unified formulation. Toward this end, the authors propose to cast the joint outlier rejection, model selection, and model estimation problem into a sparse subspace recovery problem, which can cover the widely used projective transformation models for multi-view geometry, such as the fundamental, homography, and affine models. The joint optimization formulation is solved by an alternating algorithm with the use of proximal approximation and acceleration. Experimental results on synthetic and real-world data are provided to demonstrate the performance of the new method, including fundamental matrix and homography estimation, as well as a loop closure application.
Strengths: This work is well motivated to jointly achieve outlier rejection, model selection, and model estimation in a unified formulation. The proposed sparse subspace recovery formulation covers the widely used projective transformation models for multi-view geometry, such as the fundamental, homography, and affine models. The method has been evaluated on both synthetic and real-world data via various experiments.
Weaknesses: 1. The formulation only applies to the algebraic error model, i.e., model estimation with algebraic distance. While algebraic distance is convenient due to its linearity, geometric distance is geometrically or statistically meaningful and can yield better performance in projective transformation estimation than algebraic distance. This has been demonstrated in multi-view geometry.
2. While the proposed method is well conceived, it would not outperform the simple method that first estimates the model parameters of each candidate model and then selects the best model in terms of fitting error. Although the provided experiments show that it outperforms the AIC, BIC, and GRIC selection methods, in practical applications the model is typically selected via a score of the estimated model computed from the symmetric transfer errors when the ground-truth transformation is unknown; this selection method is commonly used in practice.
3. The proposed algorithm has several hyper-parameters, e.g., $\gamma$, $\lambda$, $r$, and $\tau$. Its performance depends on the tuning of these hyper-parameters, which diminishes its potential advantage over the simple model selection method based on symmetric transfer errors.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: It is claimed that the objective with geometric error is extremely hard to optimize due to its highly non-linear nature, but in fact there exist well-developed methods in the multi-view geometry literature that can solve the formulation with geometric error efficiently and effectively.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have not addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Q1: About using geometric distance.
**R1:** We fully agree that minimizing the geometric error (GE) could yield better accuracy, since the error entry based on geometric distance is more stable. However, its non-convex nature indeed makes it hard to optimize. Although effective methods exist to search for the optimum, they usually involve heavy computation and long execution times, and are thus rarely used in real applications, such as SLAM, 3D reconstruction, etc.
To our knowledge, GE is commonly converted into a tractable form, such as a generalized fractional form (GFF) or the Sampson Distance (SD, i.e., a first-order approximation). For instance, Ref. [A] studies approximate solutions for the GE-based objective using the GFF for each match $(p,p')$:
$GE(p,p') = \frac{f(p,p')}{g(p,p')}$, where $f$ and $g$ are linear functions.
In Sec. 6.2.1 of [A], the authors also admitted that _''finding least squares estimates based on geometric distances is intractable''._
Actually, they merely explored GE minimization for the homography and affine matrices; as the authors explained in Sec. 6.1.2, _''Fundamental matrix estimation does not have the GFF'',_ hence they still used the Algebraic Error (AE) to estimate the $F$ matrix.
As for SD, take the fundamental matrix $F$ as an example:
$GE(F, p, p') = \frac{(p'^{T}Fp)^2}{(F^Tp')^2_1 + (F^Tp')^2_2 + (Fp)^2_1 + (Fp)^2_2}$.
Obviously, using SD to estimate $F$ is not easy, and at least more difficult than using AE. In fact, under consensus maximization frameworks such as RANSAC, GE is typically used to count inliers against a pixel threshold, while the model parameters are still estimated via AE. Recently, SD has often served as a loss to guide the training of deep matching methods.
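To make the SD computation concrete, here is a minimal numpy sketch (our illustration, not the paper's code) of the Sampson distance for a fundamental matrix; the matrix `F` below is illustrative only.

```python
import numpy as np

# Minimal sketch of the Sampson distance for a fundamental matrix F and
# a match (p1, p2) given in homogeneous coordinates.
def sampson_distance(F, p1, p2):
    algebraic = float(p2 @ F @ p1)        # epipolar constraint residual
    Fp1 = F @ p1                          # epipolar line in image 2
    Ftp2 = F.T @ p2                       # epipolar line in image 1
    denom = Fp1[0]**2 + Fp1[1]**2 + Ftp2[0]**2 + Ftp2[1]**2
    return algebraic**2 / denom

# Illustrative essential-like matrix for a sideways translation; a match
# satisfying the epipolar constraint has zero Sampson distance.
F = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])
p = np.array([1.0, 2.0, 1.0])
print(sampson_distance(F, p, p))  # 0.0 for this self-match
```

The non-linearity mentioned above is visible here: both numerator and denominator depend on $F$, which is what makes direct SD minimization harder than minimizing the linear algebraic residual.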
In addition, from the experimental results in Tab. 3 and Tab. 4 of [A], we clearly find that GE-based optimization takes more than 5000 ms to process around 500 point matches with a 50% inlier ratio, while our DSP costs only 183 ms on average on the HPatches dataset (average match number 3714, average inlier ratio 23%) and incurs a Geometric Error of merely 0.82 pixels. Paper [B] (Fig. 6) also reveals that the GE-based method IBCO [C] incurs a huge runtime.
Another critical point is that the core motivation of our manuscript is to explore unified modeling and an efficient solution for unknown model fitting. The GE forms vary for each specific model and are thus hard to integrate into our unified objective to achieve model reasoning.
Considering both practicability and theoretical convenience, it is better to use the Algebraic Error in our DSP. Of course, if the reviewer knows of any efficient and effective methods/literature for solving GE-based problems, please recommend them to us; this would help advance our dual sparsity formulation in the future, i.e., modeling with GE to achieve better accuracy while maintaining the real-time property.
_[A] Deterministic approximate methods for maximum consensus robust fitting. TPAMI 2021._
_[B] Efficient Deterministic Search With Robust Loss Functions for Geometric Model Fitting. TPAMI 2022._
_[C] Deterministic consensus maximization with biconvex programming, ECCV 2018._
# Q2: Details of using GRIC.
**R2:** In our experiments, we did use the geometric error to support AIC, BIC, and GRIC for model selection. In detail, for all compared methods that use AIC, BIC, or GRIC for model identification, we first utilize robust estimators (including RANSAC, USAC, MAGSAC++, EAS, OANet, SuperGlue) to estimate each model $\mathcal{M}$, then use $\mathcal{M}$ to compute the geometric error for each match pair. We use the Sampson Distance as the geometric error, computed with the Matlab toolbox.
# Q3: About the difficulty of tuning hyper parameters $\gamma, \lambda, r$ and $\tau$.
**R3:** The hyper-parameters of our DSP do not need fine-tuning for each scene, since our original purpose is to explore a unified method for all unknown models under different scenes. These parameters have exact physical significance, which provides guidance for their settings.
* In detail, $\gamma$ is used to constrain outliers: an error entry $\mathbf{e}$ is generated if the two vectors deviate from orthogonality to some extent. In other words, a correct match should satisfy
$|\cos(\theta(\mathbf{m}_i, \mathbf{x}))| \leq \gamma$,
where $\theta(\mathbf{m}_i, \mathbf{x})$ denotes the angle between the two vectors. For correct matches without noise, we have $\mathbf{m}_i^T\mathbf{x} = 0$, i.e., they are orthogonal. Under noise, we set $\gamma = 0.05$, which allows an inlier to have
$\theta(\mathbf{m}_i, \mathbf{x}) \in [88.85^{\circ},91.15^{\circ}]$.
We also follow the idea of simulated annealing: during optimization, we dynamically decrease $\gamma \leftarrow 0.98\gamma$ every 20 iterations, with $\gamma_{min} = 0.02$.
* As for parameter $\lambda$, it balances the fitting error and model complexity. Prof. Philip H.S. Torr studied this in the paper “Geometric motion segmentation and model selection”, and concluded that, under the assumption of independence among matches, the optimal estimate is contributed only by the 4 noisy coordinate values of each match $(u_i, v_i, u_i', v_i')$; thus the parameter term and error term differ by a scale factor of $\log(4N)$.
* $r$ is not a hyper-parameter, but a core integer variable that we need to estimate. It indicates the maximum number of bases of the recovered subspace or geometric model. In our DSP, we constrain it with rank maximization and solve for it progressively.
* The estimated basis number $r$ is controlled by the threshold $\tau$; fortunately, the optimal value of $\tau$ can be derived from the statistical analysis shown in Fig. 3.
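As an illustration of the $\gamma$ mechanism described above, a minimal Python sketch (hypothetical helper names, not the authors' implementation) of the inlier test and the annealing schedule:

```python
import numpy as np

# Hypothetical sketch of the gamma-based inlier test and its annealing,
# following the description above: a match m_i is treated as an inlier
# w.r.t. a basis x when |cos(theta(m_i, x))| <= gamma, and gamma decays
# by a factor of 0.98 every 20 iterations down to gamma_min = 0.02.
def is_inlier(m, x, gamma):
    cos_theta = abs(m @ x) / (np.linalg.norm(m) * np.linalg.norm(x))
    return cos_theta <= gamma

def anneal_gamma(gamma, iteration, decay=0.98, every=20, gamma_min=0.02):
    if iteration > 0 and iteration % every == 0:
        gamma = max(gamma * decay, gamma_min)
    return gamma

# Orthogonal vectors pass the test; strongly correlated ones do not.
print(is_inlier(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.05))  # True
```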
---
Rebuttal Comment 1.1:
Comment: The authors have addressed most of my previous concerns, so I raise my rating to 6. | Summary: Considering that existing model estimation methods rely heavily on a correct definition of the model type, this paper introduces a unified optimization model, DSP, to simultaneously reason out the model type and estimate the model parameters from contaminated data. To this end, the authors propose a Sparse Subspace Recovery (SSR) theory to model the geometric model estimation task, namely searching for a maximal set of independent sparse bases in an over-embedded data space. The authors also introduce a fast and robust solver to estimate the sparse subspace parameters and error entries, and validate the performance of their method on both unknown and known model estimation, as well as two applications.
Strengths: - This paper is well written and organized, and easy to follow.
- The motivation, theory, formulation and solution are good contributions for the model fitting topic.
- The authors solve the geometric model estimation problem from a novel perspective that additionally considers the model reasoning task. In particular, the authors introduce the Sparse Subspace Recovery theory, formulate the unknown model fitting task as a continuous optimization objective, and explore an efficient solution for it.
- The authors designed unknown model fitting experiments and also evaluated their method on the common exact model fitting task, which is reasonable and credible. The experiments are convincing, and the results show great superiority compared to the SOTA.
Weaknesses: - Line 74, the authors claim that if the data are properly normalized, using the algebraic error is good. But how can this property of the input data be assured?
- This paper proposes SSR theory, but the authors have not given mathematical explanation or proof.
- As for the solution, why not use ADMM to solve this problem? I think ADMM is a common choice for solving L1-norm problems with Lagrange multipliers, as used in sparse subspace clustering (SSC). Please explain.
- Line 52: How should one understand the ``insufficient information'' that GRIC uses, which may cause wrong model selection for constrained motions? What are the advantages of this method compared with those model selection criteria?
- If using DSP or SSR theory to estimate a Fundamental matrix F, how to use the rank 2 constraint of F? That is an intrinsic property of Fundamental matrix.
Typos:
- Line 60, add citation for PSGM when it first appears.
- Lines 163 and 171, ``sparse independent hyperplanes``, I think it is better to use ``sparse intersected hyperplanes`` or ``sparse independent bases``.
- Line 170, $\mathcal{R}$ should be $\mathbb{R}$ to indicate the real number space.
- Line 226, $e_k$ --> $e^k$.
- Line 239, ``can successfully addressing`` --> can successfully address.
- Line 271, ``300 image pairs of each model`` --> ``300 image pairs for each model``.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Explain how to ensure the input data properly normalized.
- Please provide mathematical explanation or proof for your SSR theory.
- Please explain or validate the necessity of the chosen solution, or compare it with ADMM.
- Please emphasize the advantages of your DSP compared with existing model selection strategies, such as GRIC.
- Correct the typos, see [Weaknesses]
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes, the authors have analyzed the limitations and potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Q1: How to ensure the input data properly normalized?
**R1:** The distribution of the input data is intrinsic to it; we cannot change it. But to make the estimation easier, we first normalize the input points of each image to zero mean and unit standard deviation, then scale each point to unit length via $\mathbf{m}_i \leftarrow \mathbf{m}_i/\|\mathbf{m}_i\|_2$.
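A minimal numpy sketch of this preprocessing (our own illustration, not the authors' code):

```python
import numpy as np

# Sketch of the described preprocessing: per-coordinate standardization
# (zero mean, unit standard deviation) followed by scaling each embedded
# point m_i to unit Euclidean length.
def normalize_points(M):
    """M: (N, d) array of embedded matches, one row per point.
    Assumes every coordinate has nonzero variance."""
    M = (M - M.mean(axis=0)) / M.std(axis=0)            # zero mean, unit std
    M = M / np.linalg.norm(M, axis=1, keepdims=True)    # unit-length rows
    return M
```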
# Q2: Mathematical explanation of Sparse Subspace Recovery (SSR) theory.
**R2:** Our SSR theory claims that ''geometric model fitting can be seen as a subspace recovery problem represented by the intersection of multiple sparse hyperplanes in an over-embedded data space'', which can be interpreted in the following mathematical form:
Suppose we are given a set of clean data $\mathcal{S} = \{\mathbf{s}_i\}_{i=1}^N$, sampled from a geometric model $\mathcal{M}$, which can be parameterized by several bases $\{\theta_j\}_{j=1}^r$. On this basis, we obtain the compact constraints $f_j(\mathbf{s}_i)^{T}\theta_j = 0,~j = 1,2,\cdots,r$, where $f_j(\mathbf{s}_i)$ is a specific embedding of point $\mathbf{s}_i$.
Our SSR theory suggests that it can be extended to $\{[f_j(\mathbf{s}_i)^{T}, g_j(\mathbf{s}_i)^{T}][\theta_j^T,\mathbf{0}^T]^T=0,~j=1,2,\cdots,r\}$. Here $g_j(\mathbf{s}_i)$ is another data embedding not contained in $f_j(\mathbf{s}_i)$, and $\mathbf{0}$ is an all-zero vector with length equal to that of $g_j(\mathbf{s}_i)$. For example, a line model yields $f(\mathbf{s}_i)= [x,y,1]^T$, $\theta = [a,b,c]^T$, with $\{f(\mathbf{s}_i)^{T}\theta = 0\}$ in the ideal case; note that here $r = 1$, so we drop the subscript $j$. We then generate $g(\mathbf{s}_i) = [xy, x^2,y^2]^T$, obtaining the over-embedded space $F(\mathbf{s}_i) = [f(\mathbf{s}_i)^{T}, g(\mathbf{s}_i)^{T}]^T = [x,y,1,xy,x^2,y^2]^T$ and $\theta' = [\theta^T,0,0,0]^T$, similarly having $F(\mathbf{s}_i)^{T}\theta' = 0$.
Based on the argument embedding $g_j(\mathbf{s}_i)$, we can construct a common embedding $F(\mathbf{s}_i)^{T}$ for multiple bases, such that $\{F(\mathbf{s}_i)^{T}\theta_j' = 0, j =1,2,\cdots, r\}$, then obtain our general formulation Eq. (9) for unknown model fitting, i.e., $ \widetilde{\Phi}(\mathbf{s}_i)^T\Psi(\mathcal{M})= \mathbf{0}$.
SSR theory can be proven easily. For each basis, since $f_j(\mathbf{s}_i)^{T}\theta_j = 0$ holds and $g_j(\mathbf{s}_i)^{T}\mathbf{0} = 0$ holds trivially, we have
$\{[f_j(\mathbf{s}_i)^{T}, g_j(\mathbf{s}_i)^{T}][\theta_j^T,\mathbf{0}^T]^T = 0,~j = 1,2,\cdots,r\}$.
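The line example above can also be checked numerically; the following sketch (illustrative coefficients only) verifies that the compact and over-embedded constraints both vanish on points of the line:

```python
import numpy as np

# Numerical illustration of the SSR claim, using the line example above:
# points on a*x + b*y + c = 0 satisfy f(s)^T theta = 0, and after the
# over-embedding with g(s) = [xy, x^2, y^2] they satisfy
# F(s)^T theta' = 0 with the zero-padded (sparse) theta' = [theta; 0; 0; 0].
a, b, c = 1.0, -2.0, 3.0
theta = np.array([a, b, c])
theta_prime = np.concatenate([theta, np.zeros(3)])

xs = np.linspace(-1.0, 1.0, 5)
ys = (a * xs + c) / 2.0          # points on the line x - 2y + 3 = 0

residuals = []
for x, y in zip(xs, ys):
    f = np.array([x, y, 1.0])                       # compact embedding
    F_emb = np.array([x, y, 1.0, x*y, x*x, y*y])    # over-embedded space
    residuals.append((f @ theta, F_emb @ theta_prime))
```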
# Q3: Why not use ADMM to solve this problem?
**R3:** Using ADMM to solve our DSP problem requires heavy computation, including the inversion of a large matrix, which leads to long execution times. Specifically, ADMM introduces an auxiliary variable $Z$ to simplify the nonlinear constraint, and we have:
$\min\limits_{(X,E,Z)} \frac{1}{2} \|M^TX-E\|_F^2 + \gamma \|E\|_{2,1} + \lambda \|Z\|_1$
$\text{s.t.}~X^TZ = I,~X = Z.$
Then the augmented Lagrangian is:
$L = \frac{1}{2} \|M^TX-E\|_F^2 + \gamma \|E\|_{2,1} + \lambda \|Z\|_1 + \langle\Lambda_1, X^TZ - I\rangle + \langle\Lambda_2, X-Z\rangle + \frac{\mu_1}{2} \|X^TZ - I\|_F^2 + \frac{\mu_2}{2} \|X-Z\|_F^2$
$= \frac{1}{2}\|M^TX-E\|_F^2 + \gamma \|E\|_{2,1} + \lambda \|Z\|_1 + \frac{\mu_1}{2} \|X^TZ - I + \frac{\Lambda_1}{\mu_1}\|_F^2 + \frac{\mu_2}{2} \|X-Z+\frac{\Lambda_2}{\mu_2}\|_F^2 + \mathrm{const}.$
Letting $\frac{\partial L}{\partial X} = 0$, we obtain the update formula for $X^{k+1}$ at given $(E^k,Z^k,\Lambda_1^k,\Lambda_2^k,\mu_1^k,\mu_2^k)$:
$X^{k+1} = [MM^T + \mu_1^k Z^k (Z^k)^T + \mu_2^k I]^{-1} [ME^{k} + \mu_1^k Z^k (I-\frac{\Lambda_1^k}{\mu_1^k})^T + \mu_2^k(Z^k - \frac{\Lambda_2^k}{\mu_2^k})],$
which is time-consuming due to the large matrix inversion. In addition, the variables $(E^k,Z^k,\Lambda_1^k,\Lambda_2^k,\mu_1^k,\mu_2^k)$ also need to be updated.
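To illustrate why this update is costly, a small numpy sketch (placeholder shapes and values, not the authors' solver) of the per-iteration linear solve:

```python
import numpy as np

# Sketch of the dominant cost in the ADMM X-update: every iteration
# solves a d x d linear system built from M M^T, i.e. an O(d^3)
# factorization per step. All variables below are illustrative iterates.
rng = np.random.default_rng(0)
d, N, r = 50, 500, 3
M = rng.standard_normal((d, N))
E = rng.standard_normal((N, r))          # current error-term iterate
Z = rng.standard_normal((d, r))          # current auxiliary iterate
L1 = rng.standard_normal((r, r))         # multiplier for X^T Z = I
L2 = rng.standard_normal((d, r))         # multiplier for X = Z
mu1 = mu2 = 1.0

lhs = M @ M.T + mu1 * (Z @ Z.T) + mu2 * np.eye(d)
rhs = M @ E + mu1 * Z @ (np.eye(r) - L1 / mu1).T + mu2 * (Z - L2 / mu2)
X_new = np.linalg.solve(lhs, rhs)        # the expensive d x d solve
```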
# Q4: Our DSP V.S. GRIC.
**R4:** GRIC is an early strategy for the unknown model fitting problem, but it is still widely used in many vision tasks. Given the model pool $\mathcal{M}$, the traditional pipeline first uses existing robust estimators, such as RANSAC, to estimate the parameters of each model in $\mathcal{M}$, then uses GRIC to select the "best" one as the final output. With this estimation-then-selection strategy, the time consumption becomes very large if the model pool is huge, and this greedy strategy easily causes wrong identification, as revealed in Tab. 1. On the contrary, our proposed DSP fully considers the structure of the sparse subspace and explicitly models noise and outliers, solving the three subproblems in a unified optimization paradigm; thus our DSP achieves better accuracy and efficiency.
# Q5: How to ensure rank 2 constraint of F model.
**R5:** During optimization, we do not consider the rank-2 constraint for $F$ at the beginning, which yields a full-rank matrix $F'$. We then decompose $F'$ with SVD, $F' = U\,\mathrm{diag}(s_1,s_2,s_3)V^T$, and the final rank-2 estimate is $\hat{F} = U\,\mathrm{diag}(s_1,s_2,0)V^T$. This is commonly done in many robust estimators, such as the 8-point algorithm.
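A short numpy sketch of this SVD-based rank-2 projection (the input matrix is illustrative only):

```python
import numpy as np

# Post-hoc rank-2 projection as described above: SVD the unconstrained
# estimate and zero the smallest singular value, as in the classical
# 8-point algorithm.
def project_rank2(F_full):
    U, s, Vt = np.linalg.svd(F_full)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt

F_full = np.array([[2.0, 0.1, 0.0],
                   [0.0, 1.0, 0.2],
                   [0.1, 0.0, 0.5]])
F_hat = project_rank2(F_full)
print(np.linalg.matrix_rank(F_hat))  # 2
```

This is the closest rank-2 matrix to $F'$ in Frobenius norm, by the Eckart–Young theorem.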
# Q6: Typos.
**R6:** We will correct them in our next version.
---
Rebuttal 2:
Comment: All of my concerns have been thoroughly addressed in the rebuttal. In general, the authors have presented a novel perspective on the problem of geometric model estimation. Notably, their contributions lie in the effective modeling of unknown model fitting and the efficient exploration of solutions. The writing and experimental parts are also commendable. Therefore, I wholeheartedly recommend accepting this manuscript. | Summary: This paper addresses the task of robust model reasoning and fitting in a unified optimization framework that can estimate the geometric model accurately without a predefined model, while being robust to outliers and highly efficient. The authors propose a novel sparse subspace recovery theory and derive corresponding propositions, giving a general and unified formulation for robust model reasoning and fitting. They then introduce an alternating optimization strategy together with a proximal approximation method to accurately estimate the sparse model parameters and outlier entries. Extensive experiments indicate that the proposed method outperforms the selected comparison methods.
Strengths: This paper solves the geometric model estimation problem from a novel perspective, namely recovering the model parameters without knowing the model type, which is interesting and valuable. The writing is good. The authors provide a clear presentation that conveys their core idea. The experimental part is reliable and well verifies the advantages of the method.
Weaknesses: 1: The authors designed an accelerated optimization approach and demonstrated its fast convergence in Fig. 2, claiming it is an 'optimal' first-order method. But there is no experimental proof of this optimality in the paper. The authors should compare the performance of the algorithm before and after acceleration.
2: The authors mention their strength in efficiency, but the experimental results do not reveal this property.
3:The authors propose a solution process similar to DPCP, and I think it should be added to the comparison algorithms as well.
4: The experiments in the paper demonstrate the excellent performance of DSP on 2D models and geometry models. I would like to know the upper bound of the algorithm. Is it possible to design experiments (e.g., continuously increasing the dimension of the data space) to test this performance upper bound?
5: In the application tests, the proposed method has been successfully applied to multi-model fitting, as we can see. So why do the authors mention that they plan to integrate multi-model fitting and achieve a four-fold task in the future?
6: Pose estimation or visual localization experiments are also necessary, since they are more common in this topic than multi-model fitting or loop closure detection.
7:Others:
Line 113, “to predefine” should be “predefining”
Line 159, “convert” should be “be converted”
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to [Weaknesses]
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have solved most limitations of their method, and they also mentioned potential negative social impact, which are acceptable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Q1: About the optimality if using the Acceleration Strategy (AS).
**R1:** The optimality is unchanged when using AS. We can see from Fig. 2 that, at the convergence stage, the solution and loss values are identical whether AS is used or not. This is because our AS is not an approximation of the original problem that might sacrifice accuracy. Instead, it borrows the concept of the Momentum/Nesterov method in gradient descent, using an adjusted starting point at each update step. It was proven in [7][30][31] that this strategy achieves an $O(1/k^2)$ convergence rate with theoretical guarantees, i.e., an ''optimal'' first-order method for smooth problems.
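For readers unfamiliar with the adjusted starting point, a minimal sketch of Nesterov-style acceleration on a smooth problem (our own illustration, not the DSP solver):

```python
import numpy as np

# Sketch of Nesterov/FISTA-style acceleration: instead of stepping from
# x_k, the gradient is evaluated at an adjusted start point y_k that
# extrapolates from the previous two iterates, giving the O(1/k^2) rate
# for smooth convex objectives.
def nesterov_minimize(grad, x0, lr, n_steps):
    x_prev = x = np.asarray(x0, dtype=float)
    t_prev = 1.0
    for _ in range(n_steps):
        t = (1 + np.sqrt(1 + 4 * t_prev**2)) / 2
        y = x + ((t_prev - 1) / t) * (x - x_prev)   # adjusted start point
        x_prev, x = x, y - lr * grad(y)
        t_prev = t
    return x

# Example: minimize a simple quadratic 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 10.0])
x_star = nesterov_minimize(lambda x: A @ x - b, np.zeros(2),
                           lr=0.1, n_steps=1000)
```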
# Q2: Emphasize the efficiency.
**R2:** We explain the strength in efficiency as follows. First, EAS [TPAMI 2022] and our DSP are both global optimization paradigms, but our DSP consumes only half the time of EAS, thanks to AS. Second, SAmpling Consensus (SAC) methods require a sufficient time budget to hit an all-inlier subset, which depends strongly on the outlier ratio and noise scale; thus our DSP is faster in high-outlier-ratio cases, a trend clearly revealed in Fig. 3 of the Supp. Mat. As for USAC, it integrates local optimization and fast model verification in a universal framework and thus achieves fast model estimation, but most of its results are coarse or incorrect. Third, the two deep methods perform only one forward pass once trained and are accelerated with GPUs, thus obtaining the best efficiency.
**Without GPUs, the average execution time of OANet increases from 13 ms to 117.6 ms (our DSP: 70.3 ms).** In addition, compared with the above methods, our DSP can estimate the geometric model from contaminated data without predefining the correct model type.
# Q3: Comparing with DPCP
**R3:** Our solution is only partially similar to DPCP. DPCP provides a plain formulation and solution for robust subspace learning with theoretical guarantees, but applying it to geometric models needs additional consideration, as in DPCP-H [CVPR 2020] and EAS [TPAMI 2022]. In particular, EAS explores a more general formulation and an efficient solution for each model type, so we take it as a representative DPCP-style method and select it for comparison. As for the optimization process, because our DSP additionally considers noise constraints and subspace complexity, our formulation is more complex and harder to optimize. We therefore explore an alternating optimization framework with a proximal approximation strategy to accurately estimate the sparse subspace and outlier entries. To tackle the constraint terms, we use a strategy similar to DPCP, performing sphere projection and orthogonal projection for the orthogonality constraint $X^TX = I$.
# Q4: Applying to high-dimensional model.
**R4:** Our method mainly focuses on geometric models, particularly two-view geometry, to reason out the model type and estimate the model parameters from corrupted data. Thus, applying it to recover high-dimensional or more complex models still needs further exploration.
# Q5: About multimodel fitting.
**R5:** Our DSP has not yet realized multi-model fitting in a unified formulation; instead, it requires a density-based clustering method to first obtain several coarse clusters, then applies our DSP to each cluster to reason out the accurate model type and parameters, largely enhancing the fitting performance. In this regard, we think it is valuable to integrate the estimation of the model number into our DSP formulation to achieve a four-fold task in the future.
# Q6: Evaluation for pose estimation or visual localization.
**R6:** In our experiments, we conducted fundamental matrix and homography matrix estimation, which directly reveal the performance on the two mentioned applications, because the camera pose is decomposed directly from the estimated $F$ matrix: with known camera intrinsics $K_1, K_2$, the relation between $F$ and the camera pose $R, t$ is $[t]_{\times}R = E = K_2^{T}FK_1$.
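A small numpy sketch of this relation (illustrative intrinsics; the further decomposition of $E$ into $R, t$ is left to standard tools). It uses the standard identity $F = K_2^{-T} E K_1^{-1}$, i.e., $E = K_2^{T} F K_1$:

```python
import numpy as np

# Round-trip check: build E from a pure sideways translation, convert to
# F with known intrinsics, and recover E back via E = K2^T F K1.
def essential_from_fundamental(F, K1, K2):
    return K2.T @ F @ K1

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                       # illustrative intrinsics
E_true = skew(np.array([1.0, 0.0, 0.0])) @ np.eye(3)  # [t]_x R with R = I
F = np.linalg.inv(K).T @ E_true @ np.linalg.inv(K)
E_rec = essential_from_fundamental(F, K, K)
```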
Following the reviewer's suggestion, we also conduct pose estimation on the large-scale YFCC100M dataset, which has ground-truth poses and sparse models obtained from an off-the-shelf SfM tool. We follow the setting in OANet (Zhang et al., 2019) and choose 4 sequences for evaluation. We detect up to 2k matches for each pair with SIFT. As in OANet, we use the AUC of the pose error at thresholds $(5^{\circ}, 10^{\circ}, 20^{\circ})$ for evaluation, where the pose error is defined as the maximum of the angular errors in rotation and translation. Our DSP obtains the best performance among the handcrafted methods.
| Method | $@5^{\circ}$ | $@10^{\circ}$ | $@20^{\circ}$ |
| ---- | ---- | ---- | ---- |
| RANSAC| 3.47 |9.10 | 18.60 |
| USAC | 5.67 | 9.53 | 13.41 |
| MAGSAC++ | 11.80 | 19.72 | 33.15|
| EAS | 12.18 | 22.22 | 35.67 |
| DSP(Ours) | 14.43 | 25.32| 37.73|
---
Rebuttal 2:
Comment: In the rebuttal, the authors have effectively addressed my concerns regarding their proposed acceleration strategy, the design of comparative methods, and the applications. These responses have provided me with a deeper understanding of the method. And, I believe that the proposed theory of unknown model fitting and its corresponding solution will make good contributions to the field of computer vision. Therefore, I keep my rating as 7. | null | null | null | null | null | null |
The Tunnel Effect: Building Data Representations in Deep Neural Networks | Accept (poster) | Summary: The paper offers an empirical study of deep neural networks. The focus is on the role of intermediate layers in building a representation that is linearly separable and can eventually solve the task. The work highlights the fact that this linearly separable, low-rank representation emerges at a depth that is a fraction of the total depth (usually 1/3 to 2/3 for common image classification setups). Layers before such point are named "the extractor", and layers afterward "the tunnel". The authors further investigate the role of "the tunnel effect" on transfer learning and catastrophic forgetting.
Strengths: The main strength of the paper is the thoroughness of the empirical study. Although only pertaining to image classification (as mentioned by the authors), the study includes a wide range of benchmark datasets and architectures, and shows results with plots that are easy to parse.
Connecting the tunneling effect to practical recommendations for transfer learning is another interesting point.
Weaknesses: Although the authors put a strong emphasis on the novelty of their results, the fact that a linearly separable representation emerges well before the final layer is not completely novel. See e.g. [1] for an analysis using similar methods.
Also, the reasons behind the so-called paradox that motivates the work are not completely clear. It is not evident how the fact that capacity increases with depth is at odds with the fact that there is an "inductive bias" toward simple solutions.
In my view, the motivation paragraph in the introduction would need to be clarified, and the conclusion should clearly state how this paradox has been addressed by the paper.
[1] Feature learning in deep classifiers through Intermediate Neural Collapse, Rangamani, Akshay; Lindegaard, Marius; Galanti, Tomer; Poggio, Tomaso.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Section 3.1 put emphasis on the fact that the tunneling effect emerges "early" during training. How do you assess what early is? Is the effect happening before/same time/after train and test losses start decreasing? It would be nice to have a comparison of these different timescales to better assess the claim of the section.
- In the introduction, the authors claim to challenge the idea that deeper layer capture more complex and task-specific features. What are the results that question this idea? For example, the observation that the transfer learning performance deteriorates after neural collapse (i.e. in the tunnel part) is consistent with the idea that the representation in these layers becomes task-specific. The fact that more complex tasks require a longer "extractor" is consistent with the idea that more complex tasks require more complex representations to be built and more complex representations are built in a hierarchical manner through more layers.
- I find the results of Figure 9 exemplificative of the fact that the network uses always the first layers to solve the task, independently of its total depth. It would be nice to have the same Figure for a CNN.
- Minor: What is the difference between blue and orange dots in Figure 10 (top).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors clearly highlight that their work focuses only on image classification and that a study for other data modalities would be interesting. I think this is a fair limitation and does not impact my evaluation of their work.
Also, at first sight, the role of skip connections would seem very relevant for the effect studied in this work, and it possibly deserves more discussion in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review.
**The fact that a linearly separable representation emerges well before the final layer is not completely novel.**
We do agree that this observation has been made in multiple works, including those we cite. We consider the primary value of our work to be *conceptualization and providing a coherent perspective of how representations are formed*. In particular, we do not mean to claim that all observations are new. We instead attempt to build standalone empirical evidence to support our tunnel hypothesis and explore its consequences in transfer learning or continual learning scenarios. We have added the reference to the Related Works section:
>Several recent works~\cite{ansuini2019intrinsic,li2022principled,rangamani2023feature} have extended the observation of NC and explored its impact on different layers, with a notable emphasis on deeper layers. \cite{li2022principled} establishes a link between collapsed features and transferability. In our experiments, we delve into tunnel creation, analyzing weight changes and model behavior in a continual learning scenario, revealing the task-agnostic nature of the tunnel layers.
**Clarification of the so-called paradox.**
We acknowledge that using the term “paradox” may have caused confusion, and we have revised the introduction and conclusion to eliminate any ambiguity. The revised paragraph in the introduction now reads as follows:
> Extensive research has ..... to simplify representations with increasing depth [38,7]. This phenomenon arises because, despite their large capacity, these networks strive to compress hidden representations by focusing on discriminative patterns during supervised training [38, 7, 8, 30]. Motivated by these findings, we aim to investigate this phenomenon further and build a comprehensive picture to answer the following research question:
>
> "How do representations depend on the depth of a layer?"
We add the following paragraph to the conclusions:
> We emphasize that our motivation for investigating this (i.e., tunnel) phenomenon is to build a coherent picture encompassing our experiments and evidence in the literature. Specifically, we aim to understand better how the neural networks handle the representation-building process in the context of depth.
**In the introduction, the authors claim to challenge the idea that deeper layer capture more complex and task-specific features. What are the results that question this idea?**
We recall the following facts from our study
1. Experiments show that the tunnel layers, with decreased out-of-distribution (OOD) performance, contribute minimally to in-domain (ID) performance (Figures 1, 2, 7). [Roughly speaking, the representations 'do not change' in the tunnel as far as performance is concerned.]
2. The tunnel layers' representations are identical (measured via CKA), indicating a lack of meaningful task-specific features (Figure 4). [Roughly speaking, the representations do not change in the tunnel when measured with CKA.]
3. Experiments demonstrate that the tunnel remains unchanged when optimizing the network for a new task, allowing for transferability across tasks without retraining (Table 4). [I.e., the tunnel is task-agnostic.]
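For reference, the linear variant of CKA used for such layer comparisons (Kornblith et al., 2019) can be sketched in a few lines (our own illustration, not the paper's code):

```python
import numpy as np

# Minimal sketch of linear CKA between two layers' activations X, Y of
# shape (n_examples, n_features); representations that are identical up
# to rotation and scaling give CKA = 1.
def linear_cka(X, Y):
    X = X - X.mean(axis=0)                 # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den
```

Identical tunnel-layer representations thus appear as pairwise CKA values near 1, as in the plots referenced above.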
Perhaps the current phrasing is somewhat misleading. Would you find the following better:
> The deeper layers (i.e., the tunnel) concentrate on compressing the representations instead of building more complex ones.
**The role of skip connection would be very relevant and deserve to be discussed more in the main text.**
We agree that skip-connections seem to play a very relevant role in this phenomenon. In the updated version of the paper, we included the study about the role of skip connections in the main text.
**The early emergence of the tunnel**
The tunnel effect develops gradually, which makes it difficult, from our standpoint, to pinpoint the specific step or epoch when it becomes noticeable. However, regarding rank, we can observe a sudden decrease within the initial training steps, as depicted in Figure 6. To gain further insights, we conducted additional experiments using linear probes attached to the network at various training stages. The results in Figure 4 of the attached PDF indicate that the accuracy saturates in deeper layers as early as the 10th epoch, despite an overall gradual improvement in accuracy in later epochs. These findings suggest that tunnel formation occurs within the first 5% of the training process.
**I find the results of Figure 9 exemplificative of the fact that the network uses always the first layers to solve the task (...). It would be nice to have the same Figure for a CNN.**
We provide the analogous Figure for the VGG-based networks of various depths -- see Figure 2 in the attached PDF file.
**Minor: What is the difference between blue and orange dots in Figure 10 (top).**
In this scenario, each network (of varying depth) was trained on a sequence of two tasks. The blue and orange colors depict the performance of the models after training on the first and second task, respectively. We'll update the Figure to include this information in the legend and caption.
If your concerns have been sufficiently addressed in our responses, we humbly seek your support for the paper and ask you to consider improving your score. Should you have any further concerns or additional points to raise, we are eager to address them.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers.
The picture that emerges from this work seems to suggest that, in the first part of the network, deeper layers capture more complex and task-specific features, while this is not the case for the tunnel part, which is mainly used to compress the representation and is not crucial for accuracy.
I raise my rating to 6.
---
Reply to Comment 1.1.1:
Comment: Again, we thank the reviewer for helping us raise this paper's quality.
Should you have any other questions, we are happy to answer them. | Summary: The tunnel effect is described for deep overparameterized networks, whereby early layers form a linearly separable representation while later layers form a "tunnel" which passes this representation to the output without substantial change, other than compression (reducing its rank, a.k.a. discarding information). A large number of experiments show that this effect occurs in a variety of different models, datasets, and training times, and impacts out-of-distribution and continual learning settings. The effect implies that the capacity of a given architecture/dataset is fixed.
Strengths: This paper provides a clear and comprehensive argument for both the existence of its hypothesis and its effects on several relevant subdomains of DNN research. The experiments are thorough, well organized, and address many potential concerns (such as consistency of the observations across different models and datasets). Some subtle details are handled correctly (e.g. use of unbiased CKA estimator in appendix E). The effect is convincing and could point to significant improvements in how transfer and continual learning are handled.
Weaknesses: The procedure for computing the numerical rank should be given fully (e.g. the value of the threshold $\sigma$ is not explicitly stated). In particular, figure 6 indicates some issues (see Questions below). Also, figure 6 is missing shading for the tunnel region after training, which would help readers reference other figures.
The difference in weights in figure 5 is not fully explained (e.g. $\tau$ should be defined). Also, while the difference is normalized relative to $\theta^0$ (the initial weight norm?, or is it the number of weights?), a better comparison would be relative to the norm of the mean change across iterations/epochs, since the learning rate changes over training. As a result, the experiment cannot rule out the possibility that the tunnel is changing relative to other layers after the learning rate is reduced.
Overall the evidence from CKA is lacking (figure 4) as it is only shown for a MLP. However, this is not a central piece of evidence.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In figure 6, why is the numerical rank for higher layers already less than that of lower layers at initialization? Given that random matrices tend to be near full-rank, this points to an issue with how numerical rank is computed (e.g. the threshold for determining rank should be layer-dependent, since different layers have different weight scales).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: ImageNet-scale networks and Transformer architectures are out of scope of this work. But given the importance of such networks to transfer learning applications, the obvious next step is to investigate the tunnel effect in these networks. Unlike the CIFAR/CINIC-scale models in this work, it is unclear what degree of overparameterization is occurring in ImageNet-scale networks and Transformers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the effort you put into reviewing our work. We find your feedback valuable and helpful in improving the quality of the paper!
**The procedure for computing the numerical rank should be given fully.**
We added the following clarification in the text:
> The threshold $\sigma$ is set to $\sigma_{1} \times 10^{-3}$, where $\sigma_{1}$ is the highest singular value of the particular matrix.
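For concreteness, the rank-estimation procedure described above can be sketched as follows (a minimal illustration, not the authors' code; the matrix shapes are invented for the example):

```python
import numpy as np

def numerical_rank(features: np.ndarray, rel_tol: float = 1e-3) -> int:
    """Count singular values above a layer-dependent threshold
    sigma_1 * rel_tol, where sigma_1 is the largest singular value."""
    s = np.linalg.svd(features, compute_uv=False)  # sorted descending
    return int(np.sum(s > s[0] * rel_tol))

rng = np.random.default_rng(0)
full = rng.standard_normal((256, 64))                          # generic random matrix
low = rng.standard_normal((256, 5)) @ rng.standard_normal((5, 64))
noisy_low = low + 1e-5 * rng.standard_normal((256, 64))        # rank 5 + tiny noise

print(numerical_rank(full))       # 64: random matrices are near full-rank
print(numerical_rank(noisy_low))  # 5: the noise falls below the relative threshold
```

Because the threshold scales with each matrix's own top singular value, the estimate is insensitive to the overall weight scale of a layer, which is the point raised in the reviewer's question about Figure 6.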
**Also, Figure 6 is missing shading for the tunnel region after training, which would help readers reference other figures.**
We added shading in Figure 6.
**The difference in weights in figure 5 is not fully explained (e.g. should be defined).**
We fixed the notation in the caption of Figure 5 and explained the formula in more detail.
>The tunnel layers stabilize early during training. Color depicts the norm of the weight difference between subsequent checkpoints (rows) for a given layer (columns). The norm is computed as $\frac{1}{\sqrt{n m}}\left\|\theta_d^{\tau_1}-\theta_d^{\tau_2}\right\|_2$, where $\theta_d^\tau \in \mathbb{R}^{n m}$ is the flattened matrix of weights from the $d$-th layer at checkpoint $\tau$. The values are clipped at $0.02$ for better presentation. At epochs 80 and 120, the learning rate is decayed. VGG-19, CIFAR-10.
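The caption's formula amounts to a size-normalized checkpoint-to-checkpoint distance; a short sketch (illustrative only, with made-up toy weight matrices, not the paper's code):

```python
import numpy as np

def weight_change(theta_t1: np.ndarray, theta_t2: np.ndarray) -> float:
    """(1 / sqrt(n*m)) * ||theta_t1 - theta_t2||_2 for an n x m weight
    matrix of one layer at two checkpoints, flattened to a vector."""
    n, m = theta_t1.shape
    diff = (theta_t1 - theta_t2).ravel()
    return float(np.linalg.norm(diff) / np.sqrt(n * m))

w = np.ones((4, 3))
print(weight_change(w, w))         # 0.0 for identical checkpoints
print(weight_change(w, w + 0.02))  # a uniform shift of 0.02 gives exactly 0.02
```

The $1/\sqrt{nm}$ factor makes values comparable across layers of different sizes, so a uniform per-weight change of $c$ maps to the value $c$ regardless of layer width.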
**The experiment cannot rule out the possibility that the tunnel is changing relative to other layers after the learning rate is reduced.**
Thank you for pointing out that possibility. To check whether tunnel formation is reversed after decaying the learning rate, we split Figure 5 into three parts, with the scale adjusted by the decay factor. The results in Figure 1 in the attached PDF align with the previously presented material. Despite the decaying learning rate, the model's behavior is consistent throughout the training, and the split between the extractor/tunnel part is visible in all phases. We will update the text and Figure 5 accordingly in the paper.
**The CKA evidence is limited to an MLP in Figure 4, which is not a pivotal part of the argument.**
In Figure 3 in the attached PDF, we present the plots for CKA similarities for VGG-19 and ResNet-34 (we also added the Figure to the paper). VGG-19 exhibits similar representations for deeper layers (starting around layer 8). In the case of ResNets, the pattern is less visible. We suspect this can be caused by skip-connections and their impact on the final representations, as mentioned in the last paragraph of the Limitations section.
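For readers unfamiliar with the similarity metric used in these figures, a minimal sketch of plain linear CKA follows (the biased variant, not the unbiased estimator mentioned elsewhere in the review; the data here is synthetic):

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices (samples x features),
    following Kornblith et al. (2019); features are mean-centered first."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 32))
B = rng.standard_normal((100, 32))

print(linear_cka(A, 2.0 * A))  # 1.0: invariant to isotropic rescaling
print(linear_cka(A, B))        # well below 1 for unrelated representations
```

High CKA between consecutive tunnel layers, as reported in Figure 4, is then read as "no significant transformation is applied between those layers."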
**In Figure 6, why is the numerical rank for higher layers already less than that of lower layers at initialization?**
As we explained earlier, the threshold $\sigma$ used for estimating the numerical rank is computed separately for each layer based on its top singular value, making it layer-dependent. Also, please note that the dimensionalities of the lower-layer representations are higher. Due to computational constraints, we subsample these layers to have at most 8000 features (e.g. [1]). Yet, the effect of decreased rank in upper layers is also present in MLPs, where the dimensionality of the representation is constant across layers and we do not subsample representations.
[1] G. Alain, Y. Bengio - Understanding intermediate layers using linear classifier probes
If your concerns have been sufficiently addressed in our responses, we humbly seek your support for the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. After reading the other comments, I believe this paper provides solid evidence but some of its observations are known in the literature. Nevertheless the original method of replication is valuable, and I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you again for your work towards making our paper better. | Summary: This paper proposes the Tunnel Hypothesis: Training of deep networks splits layers into two distinct phases: (1) extractor phase and (2) tunnel phase. Extractor phase learns the linearly separable features whereas the tunnel phase compresses the representations. The authors provide evidence towards degrading effects of this tunnel effect on OOD (out-of-distribution) samples. Further, better understanding of continual learning may be possible due to the proposed hypothesis.
Strengths: The paper has following strengths:
1. The OOD representation section was very interesting.
2. Applications to continual learning based on knowledge gained from the detailed analysis from this paper can also be useful.
3. Overall, a lot of work has gone into this paper (many experiments).
Weaknesses: The paper has following weaknesses:
1. Many of the observations are not particularly surprising. I think a lot depends on capacity of the network and the difficulty of the task at hand. It is not surprising that for a given task and a type of network, representations get learned up to a certain layer and then the remaining layers simply make the representations more compact. Indeed, if the task becomes more complex or more difficult, the tunnel length would reduce (since more layers would be spent trying to learn more complex features). This is clear from Table 1 where for ResNet-34, the tunnel length significantly reduces when going from CIFAR-10 to CIFAR-100 (from 20 to 29 layers). Thus, when the task became more difficult, more layers started getting used to learn better features. Similarly, for simple MLPs, we know that beyond a certain depth, depth does not help (this comes from many other studies, e.g., that analyze gradient properties, etc.: beyond a certain depth and without skip connections, adding more layers does not help due to vanishing gradients). The insight that “many later layers do not contribute significantly to accuracy” is also known and is precisely why “deep network pruning” literature is not able to prune later layers too significantly.
2. In the introduction section, the authors claim that “they challenge the commonly held intuition that deeper layers are responsible for capturing more complex and task-specific features”. I do not see any evidence that they changed this commonly held view. In fact, many of their experiments reinforce exactly the common viewpoint. Specifically, the authors show that later layers hurt the OOD performance. This indicates that the later layers got specialized towards the within-distribution task which is why they hurt the OOD task. Hence, if the commonly held view is being reinforced with the evidence provided by the authors, there is nothing particularly surprising about the findings.
3. I think the proposed work can have significant value in the field of continual learning and also multi-task/multi-modal learning if the observed insights can be used to guide novel architectural designs and/or loss functions. Unfortunately, the current work (despite a lot of hard work) only plays around with toy datasets in that problem space. If the authors can build further on the new insights and create new models/losses for the aforementioned areas, this can be an impactful work.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We want to thank the reviewer for their valuable feedback.
We acknowledge some presentation issues, which we discuss in the general answer. Please let us know if you find it satisfactory. We discuss the other issues below.
**Increasing depth does not help because representations are learned up to a certain layer and subsequent layers make them compact. (Weakness #1)**
We have thoroughly researched existing literature but have yet to find any work that describes this particular behavior. However, we are open to any references or sources you might be aware of in this context, and we would be grateful for any insights you can provide.
Furthermore, credible alternatives, without a tunnel, could be easily imagined. For example, considering the perspective of gradient flow, one would anticipate a contrary pattern. As the gradient norm diminishes when we move away from the output layer, one could conjecture that the upper layers should "learn more" than the bottom ones.
Last, the contribution of this work lies in conceptualizing the tunnel hypothesis underpinning several observations made in referenced literature and extensively examining its implications in transfer learning and continual learning settings.
**Lack of support for challenging the view that deep layers learn task-specific features. (Weakness #2)**
The scattered presentation could have harmed clarity and readability. Thus, we provide a concise summary of these results in one place:
1. The tunnel layers, which demonstrate a decrease in out-of-distribution (OOD) performance, do **not** contribute significantly to the in-distribution (ID) performance (Figure 1, 2, 7).
2. We want to emphasize that the tunnel layers remain unchanged when optimizing the network for a novel task (see Table 4). This transferability across tasks (without tunnel retraining) falsifies the task-specific view of the deeper layers.
3. Measured with CKA, representations at the tunnel's beginning and end are roughly the same. This suggests that no significant transformation was applied to them in the tunnel layers (Figure 4).
We are not aware of any such findings being comprehensively reported in the literature. Thus, our research provides a new, structured, and arguably valuable perspective. We'd be happy to engage in further discussion or amend our work with prior work if the reviewer offers any specific references.
**If the authors can build further on the new insights and create new models/losses for the aforementioned areas, this can be an impactful work. (Weakness #3)**
We agree this is an exciting area for further research. We have compiled a paragraph of recommendations based on new insights from the tunnel hypothesis (see below). We include these recommendations in the conclusions section.
> Our tunnel hypothesis has multiple consequences relevant to downstream tasks.
>
> In particular, focusing on the tunnel entry features is promising when dealing with distribution shift due to its strong performance with OOD data.
> For continual learning, regularizing the extractor should be enough, as the tunnel part exhibits task-agnostic behavior. Skipping feature replays in deeper layers or opting for a compact model without a tunnel can combat forgetting and enhance knowledge retention.
>
>For efficient inference, excluding tunnel layers during prediction substantially cuts computation time while preserving model accuracy, offering a practical solution for resource-constrained situations.
**Indeed, if the task becomes more complex or more difficult, the tunnel length would reduce (since more layers would be spent trying to learn more complex features).**
This observation is indeed compelling and is further explored in section 3.3 (Table 3), where we confirm the intuition that training on more complex datasets results in shorter tunnels. However, the complexity of the dataset is understood here as the number of classes. Note that datasets that are more complex in terms of the number of samples do not impact the tunnel's length (compare CIFAR-10 vs CINIC-10 in Table 1). Formalizing this relationship with more rigorous mathematical treatment is an exciting direction for future work.
**The insight that “many later layers do not contribute significantly to accuracy” is also known and is precisely why “deep network pruning” literature is not able to prune later layers too significantly.**
Your comment suggests a link between layer depth and pruning potential, but we're unsure how this aligns with our findings and whether we should reference any works that confirm those observations in our paper. We'd be happy to see a clarification of this remark and amend our submission accordingly.
We again thank the reviewer for raising important issues. We hope that our answers are satisfactory. If not, we'd be happy to provide more details. Otherwise, we'd appreciate if the reviewer reconsidered the final score of our submission.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the authors for the response. As I said in the original review, I do find their OOD related findings interesting. Sorry for not providing concrete references in my review.
Here are four references that may have made me think that the observations were not too surprising:
1. Pruning literature [Neurips 2019]: https://arxiv.org/pdf/1906.02773.pdf. See Fig. 1 and description in section 3.1 (global pruning paragraph) in this paper. It says that later layers can be pruned more. Hence, I thought that it was known knowledge that later layers have more redundant knowledge that can be removed. It is also known that if a network is too compact, it is hard to prune such networks (as seen by practical difficulty in pruning MobileNets, EfficientNets, etc.). Thus, if the task complexity is too much for the model at hand, many layers would play important role (and thus the tunnel would reduce). The tunnel hypothesis is essentially stating the same thing -- that the later layers do not contribute much new information if a task is relatively simple (thus a longer tunnel). If a task becomes more difficult, the tunnel reduces.
2. Deep Equilibrium Models (DEQ) [Neurips 2019]: https://arxiv.org/pdf/1909.01377.pdf. The whole motivation behind these networks was that after a certain layer, representation does not change much (i.e., they approach equilibrium). This can again be seen as an evidence towards representations not changing much after a certain number of layers. Please also check some other references on page 1 of the original DEQ paper above (refs [18,8,15]).
3. Feature visualization kinds of research [Distill 2017]: https://distill.pub/2017/feature-visualization. These are just some papers that talk in detail about how representations form (but from visualization perspective). This one may not have too much insight but it shows empirically how representations form.
4. How information content changes through the training: https://arxiv.org/pdf/1703.00810.pdf. This formally looks into how representations compress over the course of training using mutual information measures, etc. There was a lot of debate around this work, so I encourage authors to look at other follow up works also (e.g., ICLR 2018 https://openreview.net/forum?id=ry_WPG-A-).
On the "more complex tasks requiring shorter tunnels" section above: I was only saying that given that features form slowly through the network layers (edges to complex shapes, as shown in the above Distill 2017 visualization paper), I imagined that for a more complex task, the network would spend more layers trying to learn the representations (and thus the tunnel would be shorter). And if the task is easy, several layers would learn redundant info (and, hence, layer output would not change between such layers...which is clear from your CKA analysis). Moreover, given the pruning literature, I always assumed this redundancy was happening towards the end. Maybe it is just me...but these are some of the reasons I was not surprised.
If this paper had more theoretical results, that would make the case much stronger. For now, I will keep my rating. I will leave it up to other reviewers and AC in case I genuinely missed something.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: First, we’d like to say that we enjoy the discussion with the reviewer, even if we disagree. Thank you for raising relevant points.
We admit some deficiencies in presentation, particularly the emphasis on 'surprisingness,' which is now removed. We have amended the paper as outlined in the general answer 'scope and novelty' section. Now, we underline that the value of our work is *conceptualization, providing a coherent picture, and studying its implications*. We believe it is valuable, and for example, it constitutes the solid ground for building further insights, e.g., the mentioned OOD results. Please let us know if any more changes would be beneficial.
**Regarding the papers**, we thank you for suggesting them. The following paragraph will be added to the related work section:
> The analysis of representations across layers has been a focal point in many related studies. Visualization of layer representations indicates that higher layers capture intricate and meaningful features, often formed through combinations of lower-layer features~[7]. This phenomenon potentially accounts for the extension of feature extractors for complex tasks. Work [1] builds a theoretical picture that stacked sequence models tend to converge to a fixed state with infinite depth and proposes a method to compute the finite equivalent of such networks. The framework of [1] encompasses previous empirical findings of [2,3,4]. Independently, research on pruning methods has highlighted a greater neuron count in pruned final layers compared to initial layers [8], which is in line with the tunnel's existence. Furthermore, in [5,6], authors showed that training neural networks may lead to compressing information contained in consecutive hidden layers.
In more detail:
Referenced articles [2,3,4] show the near-state-of-the-art performance of weight tying in deep sequence models. We believe that these findings are orthogonal to our extractor-tunnel hypothesis. Deep Equilibrium Models [1] extend these results and present a very interesting analysis, which states that deep sequence models tend to converge to a steady state. The first significant difference is the assumption of homogeneity, i.e., the states are transformed iteratively using the same function instead of 'different' layers. Secondly, [1] is more on the theoretical side; we show empirically that the phenomena in question occur for 'standard-depth' neural nets. Thirdly, we show that the representations are not static in the tunnel when we probe them with the OOD tasks.
Regarding the Information Bottleneck concept [5], it is compatible with our observation. However, we do not see a direct way of deriving the extractor-tunnel transition and its relative sharpness. Moreover, [5] hinges heavily on the particular activation function, as shown in [6], and does not necessarily hold for the RELU activations used in our work. Finally, again [5] does not allow to derive the OOD behavior.
The visualizations presented in [7] nicely illustrate the extractor, although, in our opinion, they do not suffice to directly derive the existence of the tunnel nor the sharp transition.
We find the pruning results somewhat orthogonal. We would say that our results and [8] complement each other rather than can be derived from each other.
Last but not least, we appreciate the reviewer's sentiment that these works are mounting evidence for the extractor-tunnel phenomenon. Nevertheless, in our view, pinpointing and naming it is of value to the community.
**If this paper had more theoretical results, that would make the case much stronger.**
We agree; we consider this work as a solid starting ground for future theoretical research. It has been added to the limitations and further work section.
[1] Bai, S., Kolter, J. Z., & Koltun, V. (2019). Deep equilibrium models. Advances in Neural Information Processing Systems, 32.
[2] Dabre, R., & Fujita, A. (2019, July). Recurrent stacking of layers for compact neural machine translation models. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 6292-6299).
[3] Bai, S., Kolter, J. Z., & Koltun, V. (2018). Trellis networks for sequence modeling. arXiv preprint arXiv:1810.06682.
[4] Dehghani, M., Gouws, S., Vinyals, O., Uszkoreit, J., & Kaiser, Ł. (2018). Universal transformers. arXiv preprint arXiv:1807.03819.
[5] Shwartz-Ziv, R., & Tishby, N. (2017). Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810.
[6] Saxe, A. M., et al., (2019). On the information bottleneck theory of deep learning. Journal of Statistical Mechanics: Theory and Experiment, 2019(12), 124020.
[7] Olah, C., Mordvintsev, A., & Schubert, L. (2017). Feature visualization. Distill, 2(11), e7.
[8] Morcos, A., et al., (2019). One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers. Advances in neural information processing systems, 32. | Summary: This paper shows an effect of deep neural networks when trained for classification tasks — the initial layers create linearly separable features, and the later layers collapse the features for the final prediction. This phenomenon is explored with extensive experiments.
Strengths: - The paper explored a very interesting phenomenon of how the features are learned dynamically through layers.
- The paper performed extensive experiments to show how this "tunnel effect" affects the model performance under different settings.
- The experiments are well-designed, and the results are demonstrated well.
Weaknesses: - This is not the first/only paper that discovered some similar effects under a similar setting; therefore, a more comprehensive comparison with them in the related work section, as well as a clarification of the contribution, should be added. E.g., [1] examines the feature "intrinsic dimension", and [2] analyzes the generalization effects of feature neural collapse on in-domain and out-of-domain data.
- The authors mentioned the network can be split into the extractor and the tunnel, which compresses the features; however, it can be difficult to systematically split the network, and it seems the authors also did not provide a systematic split of the network based on the numerical rank.
[1] Ansuini et. al. Intrinsic dimension of data representations in deep neural networks.
[2] Li et. al. Principled and Efficient Transfer Learning of Deep Models via Neural Collapse.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the tunnel effect, or the numerical rank provide practical guidance on which feature we should use for different problems, e.g. out-of-distribution data, continual learning?
- Following up on the previous question, if this tunnel effect can provide guidance, how can we use this tunnel effect during model training and/or model inference?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the paper is relatively thoroughly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments.
**Considering similar findings in other papers, enhance the related work section with comparisons and clarify contributions.**
We have added the references [1] and [2] to the related works section:
>Several recent works~\cite{ansuini2019intrinsic,li2022principled,rangamani2023feature} have extended the observation of NC and explored its impact on different layers, with a notable emphasis on deeper layers. \cite{li2022principled} establishes a link between collapsed features and transferability. In our experiments, we delve into tunnel creation, analyzing weight changes and model behavior in a continual learning scenario, revealing the task-agnostic nature of the tunnel layers.
**Systematically dividing the network is challenging, and the authors did not offer a numerical rank-based network split.**
We are not certain that we understand this concern correctly (if the following answer is off, please let us know). In our experiments, we observed a relatively straightforward pattern: the point where the accuracy flattens (e.g., exceeds 90% of the final value) coincides with the point where the rank starts to drop. This gives a rather sharp boundary between the extractor and the tunnel.
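This accuracy-saturation heuristic could be sketched as follows (purely illustrative; the function name, the 90% threshold, and the probe accuracies are assumptions for the example, not the paper's procedure):

```python
import numpy as np

def tunnel_start(probe_acc, frac: float = 0.9) -> int:
    """Index of the first layer whose linear-probe accuracy reaches
    `frac` of the final layer's accuracy -- a simple heuristic for
    locating the extractor/tunnel boundary."""
    acc = np.asarray(probe_acc, dtype=float)
    return int(np.argmax(acc >= frac * acc[-1]))

# Probe accuracies that saturate after layer 3: boundary detected there.
print(tunnel_start([0.2, 0.5, 0.7, 0.91, 0.92, 0.92, 0.93]))  # 3
```

Layers at and beyond the returned index would then be treated as the tunnel, which under the hypothesis should coincide with the point where the numerical rank of the representations begins to drop.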
**Can the tunnel effect, or the numerical rank provide practical guidance on which feature we should use for different problems, e.g. out-of-distribution data, continual learning?**
We have compiled a paragraph of recommendations based on new insights from the tunnel hypothesis (see below). We include these recommendations in the conclusions section.
> Our tunnel hypothesis has multiple consequences relevant to downstream tasks.
>
> In particular, focusing on the tunnel entry features is promising when dealing with distribution shift due to its strong performance with OOD data.
> For continual learning, regularizing the extractor should be enough, as the tunnel part exhibits task-agnostic behavior. Skipping feature replays in deeper layers or opting for a compact model without a tunnel can combat forgetting and enhance knowledge retention.
>
>For efficient inference, excluding tunnel layers during prediction substantially cuts computation time while preserving model accuracy, offering a practical solution for resource-constrained situations.
**How can we use this tunnel effect during model training and/or model inference?**
One practical approach is to conduct model inference without the tunnel layers, as these layers do not contribute to the final performance. In continual learning, if tasks have the same class count, the tunnel remains unchanged; thus, not updating or removing the tunnel layers can be considered for new tasks.
If our responses have adequately addressed your concerns, we kindly request your support and consideration of improving your score. If you have any further concerns or additional points to raise, we are eager to address them. Your insights are valuable in enhancing the quality and impact of our research.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. I think the authors successfully addressed my concerns and answered my questions. The paper is very interesting; therefore, I will keep my original score and suggest accepting the paper.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for supporting our work and, again, for useful comments and suggestions. | Rebuttal 1:
Rebuttal: ## General Response
Dear reviewers,
Many thanks for providing valuable feedback in your reviews, both positive and negative. We are delighted to note that all the reviewers praised the scale of our experimentation and found our results very interesting (xLcw, dmw5), with xLcw reporting that they are 'well-designed' and EeeK agreeing on their thoroughness and organization. Moreover, dmw5, EeeK, and Kpac indicated that the presented analysis could have implications for downstream tasks like OOD or continual learning. The reviewers expressed interest in further pursuing this line of research in other modalities (Kpac).
### Scope and novelty
On the negative side, reviewers dmw5 and Kpac raised concerns regarding the novelty of our findings and, consequently, the contribution of our work. We consider the primary value of our work to be *conceptualization and providing a coherent perspective of how representations are formed*. In particular, we do not mean to claim that all observations are new. Instead, we attempt to build standalone empirical evidence to support our tunnel hypothesis. Moreover, we acknowledge that we missed discussing [1,2,3] more thoroughly; this discussion will be added.
To be clear, we apply the following amendments to the paper:
- we add the papers (see above) to the related work section along with the discussion
- we update the motivation and conclusion paragraphs (see below)
- we change the first point of the contribution to "We *conceptualize* and extensively examine the tunnel effect, namely, deep networks naturally split into the extractor responsible for building."
**The revised paragraph in the introduction now reads as follows:**
> Extensive research has ..... to simplify representations with increasing depth [38,7]. This phenomenon arises because, despite their large capacity, these networks strive to compress hidden representations by focusing on discriminative patterns during supervised training [38, 7, 8, 30]. Motivated by these findings, we aim to investigate this phenomenon further and build a comprehensive picture to answer the following research question:
>
> "How do representations depend on the depth of a layer?"
**We add the following paragraph to the conclusions:**
> We emphasize that our motivation for investigating this (i.e., tunnel) phenomenon is to build a coherent picture encompassing our experiments and evidence in the literature. Specifically, we aim to understand better how the neural networks handle the representation-building process in the context of depth.
[1] Ansuini et al. Intrinsic Dimension of Data Representations in Deep Neural Networks.
[2] Li et al. Principled and Efficient Transfer Learning of Deep Models via Neural Collapse.
[3] Rangamani et al. Feature Learning in Deep Classifiers through Intermediate Neural Collapse.
Pdf: /pdf/b31a1bf063d902f73b4d140aae0f60925ad84718.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Stability of Random Forests and Coverage of Random-Forest Prediction Intervals | Accept (poster) | Summary: This work studies the stability of random forests for regression, and the authors present theoretical analysis of upper and lower bounds for the coverage probability of prediction intervals constructed from the out-of-bag error of random forests. The theoretical guarantee is based on a light-tail assumption on the marginal distribution of the squared response.
------------------after response---------------------------------
After reading the authors' response, I do not think the authors have answered my concerns, in particular regarding novelty and significance.
As a theoretical work, it is very important to evaluate the theoretical novelty and techniques, and here I find only incremental results based on well-known techniques.
I do not find any experiments, and I do not know why the authors could claim that "Our work applies to many variants of random forests ... which makes it particularly relevant in theory and practice".
Strengths: 1) It is an interesting problem on the theoretical understanding of random forests.
2) Some theoretical results on the coverage probability of prediction intervals constructed from the out-of-bag error of random forests.
3) Limited theoretical technical contributions
Weaknesses: 1) The problem is not very clear. The authors should first present the studied problem, i.e., the original random forests for regression, random-forest intervals, or prediction intervals constructed from the out-of-bag error of random forests. It is very confusing to understand the main contributions of the current submission and the relevant work. For completeness, it would be better to present the detailed algorithm, rather than pointing readers to other research work.
2) The main conclusions are not clear. The main contribution of this work is the stability of random forests in Theorem 1. Generally, a theoretical work is seriously concerned with the convergence rate of stability and how tight this rate is. It would be better to present the specific expression for $\nu_{n,B}$ and make the necessary discussion. What factors affect the stability rate?
3) Some important definitions and notions are missing. For example, where is the definition of "light tail", which is the basic assumption in the main theoretical results? How does one characterize a light tail and its relevant factors?
4) The authors should clarify the novelty and significance of the main results, for example, the new theoretical insights in the technical proofs of this work. For a purely theoretical problem, it is important to present some new technical proof, rather than a simple extension of current techniques. What is the significance of the main results? Is it possible to present some practical guidance and suggest new algorithms?
5) The authors should have a good background on the theoretical analysis on random forests, for example,
G. Biau, L. Devroye, G. Lugosi, Consistency of random forests and other averaging classifiers, JMLR 2008.
M. Denil, D. Matheson, N. De Freitas, Narrowing the gap: random forests in theory and in practice, ICML 2014.
W. Gao, F. Xu and Z.-H. Zhou. Towards convergence rate analysis of random forests for classification. AIJ, 2022.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1) The problem is not very clear. The authors should first present the studied problem, i.e., the original random forests for regression, random-forest intervals, or prediction intervals constructed from the out-of-bag error of random forests. It is very confusing to understand the main contributions of the current submission and the relevant work. For completeness, it would be better to present the detailed algorithm, rather than pointing readers to other research work.
2) The main conclusions are not clear. The main contribution of this work is the stability of random forests in Theorem 1. Generally, a theoretical work is seriously concerned with the convergence rate of stability and how tight this rate is. It would be better to present the specific expression for $\nu_{n,B}$ and make the necessary discussion. What factors affect the stability rate?
3) Some important definitions and notions are missing. For example, where is the definition of "light tail", which is the basic assumption in the main theoretical results? How does one characterize a light tail and its relevant factors?
4) The authors should clarify the novelty and significance of the main results, for example, the new theoretical insights in the technical proofs of this work. For a purely theoretical problem, it is important to present some new technical proof, rather than a simple extension of current techniques. What is the significance of the main results? Is it possible to present some practical guidance and suggest new algorithms?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This is a pure theoretical work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for agreeing that "It is an interesting problem on the theoretical understanding of random forests." We will seriously take the reviewer's comments into consideration and make revisions accordingly. Below are point-by-point responses to the reviewer's "Weaknesses" comments.
1. We want to make it clear here that there are two major contributions of our work. The first is establishing the stability of random forests under mild conditions; the second is providing non-asymptotic coverage guarantees for random forest prediction intervals (RFPIs), which is based on the stability property. It is worth stressing that methods to construct RFPIs have been proposed in Refs. [14] and [32]. However, in Ref. [14], there is no theoretical guarantee for the coverage probability; in Ref. [32], the coverage probability is established under very strong assumptions and it is an asymptotic result. It is NOT our aim to provide a completely new way to construct RFPIs. Rather, we provide further theoretical justifications to existing methods under much weaker assumptions. To the best of our knowledge, our non-asymptotic guarantees (Theorems 8 and 9) are the first such results for the practical version of the random forest algorithm. We hope this clarifies the main points of our work. In the revised manuscript, we will clarify the goals of our work and add a complete description of the RFPI algorithm for which our theory provides coverage guarantees.
2. The stability we are considering is also referred to as "out-of-sample stability" in Ref. [3]. The parameters $(\varepsilon,\nu)$ characterize a "spectrum" of how stable an algorithm is (per Professor Barber's recent talk on stability of black-box algorithms), and in general there is no requirement that $(\varepsilon,\nu)$ must converge to 0 for a given algorithm. That said, a vanishing pair of $(\varepsilon,\nu)$ is practically appealing. We show in Theorem 7 that such parameters depend on the data distribution, number of data points $n$, and number of trees $B$ in the random forest algorithm. For non-heavy-tailed distributions, letting $n$ and $B$ go to infinity can result in vanishing $(\varepsilon_{n,B},\nu_{n,B})$. In the revised manuscript, we will present the "specific expression for $\nu_{n,B}$, and make necessary discussions," as suggested by the reviewer, which was partially addressed in the Appendix (C.4).
The tightness of the convergence rate is relevant. An inspection of our proof shows that there is some freedom in choosing the dependence of $B$, as well as of $\lambda$, on $n$ to have vanishing $(\varepsilon_{n,B},\nu_{n,B})$, and the optimal rate depends on the data distribution (through $\mathbb{E}[Z_{(n)}^2]$). However, $\mathbb{E}[Z_{(n)}^2]$ originates from the bound for $|\mathsf{rf}-\mathsf{rf}^{\backslash i}|$, and it is unclear whether this is the tightest bound in general, as it only takes advantage of the range of the predicted values of tree predictors. Whether we can find a more informative bound for $|\mathsf{rf}-\mathsf{rf}^{\backslash i}|$ is an open problem, and only after this is resolved can we discuss the optimal rate of convergence for $(\varepsilon_{n,B},\nu_{n,B})$. To find such an improved bound is one of our future research directions, as stated in Sec. 3.4.
3. By light-tail we mean non-heavy-tail. Please note that "light tail" appears in the informal Theorem 1 and other qualitative discussions, and when formally presenting our stability result with vanishing $(\varepsilon_{n,B},\nu_{n,B})$ in Corollary 3, we technically use sub-gamma distributions, which are well defined. We understand that "light tail" could be confusing for some readers, and we will make revisions accordingly.
4. The reviewer mainly addresses two points here: "new technical proof" and "new algorithm." For the first point, we want to emphasize that our aim in this work is not to provide new techniques. Rather, we use recently established techniques (stability by bagging) and standard concentration inequalities to provide new understanding of the random forest algorithm used in practice. We show that the practical version of random forests is stable, and the previously proposed methods to construct prediction intervals are almost provably valid approaches, both under mild conditions. These are the two major contributions of this work. (Corollary 4 is also a new result for the jackknife-with-stability method.) We don't believe that the techniques in our proofs are "simple extensions of current techniques." There are at least two key developments: first, we notice the range of tree predictors is bounded conditioned on training data, which does not hold for a general learning algorithm; second, we take into account that $B_i$, the number of times the $i$th data point is not included in a bootstrap sample, is random, rather than fixed. We also provide the complete proof of an important result, Lemma 5, in the Appendix, and we establish the upper bound result by noticing the symmetry in the probabilistic deviation bound in the Appendix (E). None of this work is trivial. For the second point, again, it is not our aim in this work. However, our stability results do inspire new algorithms in other topics. Some of our ongoing work uses the stability property of random forests to perform active learning, which will be reported elsewhere.
5. We will cite the suggested papers. Please note that although there are many theoretical results for random forests, few apply to the greedy random forest used in practice. Our work applies to many variants of random forests, including the greedy one, which makes it particularly relevant in theory and practice.
We hope after we fully address the comments, the reviewer will consider our work important and raise the score. | Summary: Random forests are one of the most widely used machine learning methods. The standard variant (for regression) takes the following form. Given a random sample $D=(X_i,Y_i)_{i\leq n}$ of covariate/response pairs, one takes $B$ bootstrapped samples from $D$ and trains a tree regressor on each bootstrapped sample (using one of many possible criteria). The estimated regression function $RF(x)$ is the average of the $B$ tree regressors on $x$.
Theorem 1 of the present paper is a kind of stability property for $RF$ vis-à-vis its "out-of-bag" variant $RF^{\backslash i}$. Here $i\in [n]$ and $RF^{\backslash i}$ is the version of $RF$ where bootstrapped samples containing the $i$-th sample point are discarded. The paper then shows that one can use this stability property to build predictive intervals from the "OOB residuals" $|Y_i-RF^{\backslash i}(X_i)|$. Theorem 2 proves that these intervals, after a slight enlargement, provide good coverage at near-nominal levels, whereas Theorem 3 gives exact $1-\alpha$ coverage in the limit, under suitable assumptions.
One of the main assumptions the authors impose for these last results is that the response variables be light-tailed. However, a small set of experiments suggests that similar properties would hold even for very heavy tailed covariates.
The present paper joins a string of recent works dealing with uncertainty quantification for ML methods without resorting to a calibration sample (in which case one could use conformal prediction). The authors argue that the main distinguishing feature of Theorems 2 and 3 is that their predictive intervals are much less demanding computationally, as computing the OOB predictors comes as a byproduct of the RF computations. By contrast, related work (e.g., on jackknife-type estimators) requires the regression method to be rerun several times on leave-one-out samples.
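The OOB-residual interval construction summarized above can be sketched in a few lines of numpy. The forest itself is stubbed out here: any source of per-sample out-of-bag predictions (e.g., sklearn's `RandomForestRegressor(oob_score=True).oob_prediction_`) would do, and the finite-sample quantile level is the standard conformal correction, an illustrative assumption rather than the paper's exact choice.

```python
import numpy as np

def oob_interval(y_train, oob_pred, test_pred, alpha=0.1):
    """Symmetric prediction intervals from out-of-bag residuals.

    y_train  : (n,) training responses Y_i
    oob_pred : (n,) out-of-bag predictions RF^(-i)(X_i), a training byproduct
    test_pred: (m,) point predictions RF(x) on new inputs
    Returns (lower, upper) arrays targeting ~(1 - alpha) coverage.
    """
    resid = np.abs(y_train - oob_pred)  # OOB residuals |Y_i - RF^(-i)(X_i)|
    n = len(resid)
    # Conformal-style finite-sample level: ceil((1 - alpha)(n + 1)) / n.
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(resid, level)
    return test_pred - q, test_pred + q
```

No refitting or leave-one-out retraining occurs: the only inputs beyond the point predictions are the OOB residuals, which is exactly the computational advantage contrasted with jackknife-type methods.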
Let me also say a few words about proofs. The idea is to first explore the $B\to +\infty$ limit of $RF$ and $RF^{\backslash i}$ and prove stability in that setting, via arguments from arXiv:2301.12600.
Strengths: As noted above, the paper obtains a lightweight method to compute (nearly-)valid prediction intervals from random forests, which are often used in practice. This means that the result is significant (though see the next field). It also seems to be original. The exposition is fairly clear.
Weaknesses: * Mathematically, it seems that much of the work behind the paper comes from arXiv:2301.12600 by Soloff, Barber and Willett.
* The nonasymptotic Theorem 2 is a bit unsatisfying in that both the interval length and the coverage are subject to errors that are hard to quantify in practice (still, since people will use RF anyway, it's nice to have some result of this kind).
* The results do not give a good bound on how large $B$ needs to be in order for the method to work well. This is clearly related not just to the maximum $Z_{(n)}$, but to how small an error one wants to allow when defining the interval.
* One very minor comment: Theorem 3.4 from arXiv:2301.12600 (quoted in line 163) should be Theorem 9 (the numbering seems to have changed from v1).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1) It seems from reading arXiv:2301.12600 that $Z_{(n)}$ could be replaced with the range $\max_{1\leq i<j\leq n}|Y_i-Y_j|$ throughout the analysis. Is this true? One advantage of the range is that it is centered.
2) The authors obtain what is called a "marginal coverage" guarantee: if one single extra point $(X,Y)$ is given, then everything works. A stronger guarantee would be the following: letting $\mu$ denote the distribution of $(X,Y)$,
$$P\{\mu\{(x,y): |y-RF(x)|\leq q_\alpha(R_i) + \epsilon\}\geq 1-\alpha\}\geq 1-\nu.$$
Could this stronger guarantee be obtained via the present methods?
3) As noted in a previous field, computing how large $B$ needs to be seems to be a pressing problem. Checking the bound in Theorem 12 in arXiv:2301.12600 (v2), it seems that the present paper skips the variance-based proof for the variability of $RF_B$ and goes straight to the range-based bound. This means that the parameter $\epsilon_{n,B}$ in Theorem 2 is a sum of two parts: one that would be there even if $B=+\infty$, and a second term that comes from finite $B$. It seems to me, however, that this second term could be controlled via suitable empirical concentration inequalities such as Lemma E.4 in https://arxiv.org/pdf/2212.09900.pdf: in practice, this would mean that the second error term just mentioned could be controlled for finite $B$. This, in turn, would make the method a bit heavier, but more quantifiable. Could the authors comment on this?
4) Regarding Theorems 3 and 10: it would seem that a sufficient condition for the theorem to hold is that
$Y-\mathbb{E}[Y\mid X]=:\eta$ satisfies:
$$\lim_{h\searrow 0}\mathbb{E}[\sup_{x,y\in\mathbb{R}\,:\,|x-y|\leq h}|F_{\eta\mid X}(x)-F_{\eta\mid X}(y)|]=0.$$
where $F_{\eta\mid X}$ is the conditional cdf of $\eta$ given $X$. This kind of condition is very natural: eg. it holds if the density of $\eta\mid X$ is uniformly bounded. Would it indeed be sufficient to require this condition on $\eta$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: There is no explicit discussion of limitations, nor do I think one would be necessary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments on our work. In particular, we are happy that the reviewer considers that "the result is significant" and "original," obtaining "a lightweight method to compute (nearly-)valid prediction intervals from random forests, which are often used in practice." The most important message in our work is indeed that random forests can be used to construct justified prediction intervals at almost no extra computational cost. Below are our point-by-point responses to the reviewer's comments in "Weaknesses" (W) and "Questions" (Q).
1. (W1) We agree that Ref. [25] (arXiv:2301.12600) serves as an important foundation of this work. However, there are two key developments in this work to achieve the random forest stability. First, we consider regression problems where the base tree learner does not output unconditionally bounded predictive values. This is tackled by introducing the conditional stability property first and then averaging over the data distribution. As such, we arrive at the out-of-sample stability as discussed in Ref. [3], which is different from the absolute stability (Definition 4 in the updated version of Ref. [25]). Definition 11 in the updated version of Ref. [25] is like our definition of conditional stability where a data-dependent range is involved. Second, in our work, each $B_i$, the number of times the $i$th data point is not included in a bootstrap sample, is random, while in Ref. [25], it is fixed to be a constant. Our setting is technically trickier, as the lack of a universal constant lower bound for $B_i$ makes the analysis more difficult and results in a slower convergence rate for the $|\mathsf{rf}^{\backslash i}-\mathsf{RF}^{\backslash i}|$ term.
2. (W2) We also agree that the stability parameters can be hard to quantify in practice. Nonetheless, the most important point of our work is to qualitatively prove that the practical version of random forests is stable under mild conditions, and we hope our work will inspire more research in this direction. We thank the reviewer for considering that "it's nice to have some result of this kind."
3. (W3) We thank the reviewer for this insightful question. On one hand, the current framework of analysis involves many factors: the data distribution, the dependence of $\varepsilon_i$ and $\lambda$ on $n$, etc. For such relatively simple situations as $Y$ is bounded, from Corollary 5 in the Appendix, we can conclude that $B$ does not need to be extremely large. A sublinear dependence on $n$ suffices. However, there are six terms in $\varepsilon_{n,B}$ and $\nu_{n,B}$, and there is some freedom in choosing each term's convergence rate, which may also depend on factors other than $B$, such as the dependence of $\varepsilon_1$ on $n$. Rather than look for some "optimal" convergence rate, we focus on presenting the important qualitative result that $\varepsilon_{n,B}$ and $\nu_{n,B}$ can converge to 0 under mild conditions. On the other hand, the current framework of analysis itself is not necessarily the "optimal" one. For example, we have required $\varepsilon_1=\varepsilon_3$ in our proof for simplicity, but doing so removes a degree of freedom. Moreover, all the analysis in this work is built on the upper bound $2Z_{(n)}$ for the difference between $\mathsf{rf}$ and $\mathsf{rf}^{\backslash i}$. This bound could be crude for a typical dataset, as hinted by our numerical results in Sec. 3.4. So, at this stage, we focus on the qualitative aspect of our work, leaving the quantitative improvements as future study.
4. (W4) Thanks for pointing out this change in the updated version of Ref. [25]. We will revise our manuscript accordingly.
1. (Q1) Yes. $\max_{i<j}|Y_i-Y_j| \leq 2Z_{(n)}$, with both serving as a measure of the spread of $Y$. We think your statement that "One advantage of the range is that it is centered" refers to the fact that $Y_i-Y_j$ is centered. Please let us know if this is not what you mean. The main reason we use $Z_{(n)}$ is that in some cases, the scaling or even the exact expression of $\mathbb{E}[Z_{(n)}^2]$ is handy to use, thus simplifying the analysis.
2. (Q2) We are not completely sure about the LaTeX expression, which seems to address the coverage probability conditioned on the training set. If so, then this is a very insightful question with strong practical relevance. Our theory basically follows the idea of conformal prediction, and thus cannot provide guarantees conditional on the training set. We note there are several recent works that try to generalize the marginal coverage to various kinds of conditional coverages such as arXiv:2305.12616. We hope in the future some conditional coverage of random forest prediction intervals can be established.
3. (Q3) Thanks very much for raising the point of using a potentially sharper concentration inequality in our analysis. We believe the situation is like the choice between variance tensorization (Theorem 2.3 in Ramon van Handel's notes "Probability in High Dimension") and bounded difference inequalities (Corollary 2.4, ibid.). Our choice of using $2Z_{(n)}$ as the bound corresponds to the latter, which is technically more tractable when a precise calculation of the variance is not easy. If there is a way to calculate the variance, of course a better bound can be established. (But at this moment, we are unaware of any such results.)
4. (Q4) The asymptotic coverage that we establish concerns the continuity property of the CDF of the residual $|Y-\mathsf{RF}(X)|$, and there is no requirement for the CDF of $Y-\mathbb{E}[Y|X]$. As $Y-\mathsf{RF}(X)$ does not necessarily converge to $Y-\mathbb{E}[Y|X]$ in probability, the CDF of the latter is not relevant.
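For completeness, the inequality invoked in our response to (Q1) is a one-line consequence of the triangle inequality (using that $Z_{(n)}=\max_{1\leq k\leq n}|Y_k|$, per the paper's notation):

```latex
\max_{1\leq i<j\leq n}|Y_i-Y_j|
  \;\leq\; \max_{1\leq i<j\leq n}\bigl(|Y_i|+|Y_j|\bigr)
  \;\leq\; 2\max_{1\leq k\leq n}|Y_k|
  \;=\; 2Z_{(n)},
```

so any step of the analysis that only uses the bound $2Z_{(n)}$ remains valid with the range in its place.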
We greatly thank the reviewer for a careful reading of the technical details, useful suggestions, and many insightful questions.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you very much for the rebuttal. I have nothing else to add at this point. | Summary: In this paper the authors consider the stability of the Random Forest algorithm, as often used in practice, and provide theoretical bounds on the $\varepsilon$-stability up to an order of $O_{\mathbb{P}}(|Y|^2_{(n)}/n)$ (i.e., in terms of the largest observation in magnitude) when fitting the method with $n$ i.i.d. sample points $(X_i,Y_i)$. A light-tailed assumption on $Y^2$ thereby yields suitable control over the asymptotic behavior of $|Y|^2_{(n)}/n$. Further, the stability results are used to derive $n$-dependent lower and upper bounds (under increasingly more assumptions) for the coverage probability of prediction intervals constructed from the out-of-bag error of random forests. In comparison to many other results in the literature, this paper works with a practical version of random forests.
Strengths: 1. In comparison to many other results in the literature, this paper works with a more practical version of random forests.
Weaknesses: 1. It was not clear whether the stability bounds were optimal or could be improved.
2. The presentation usually benefits from working under a single set of assumptions instead of increasingly more assumptions (however this is a minor point).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: If one assumes that $Y=f(X)+\eta$ for $\eta \sim F$ for some suitable light tailed distribution and $f\in \mathcal{F}$ in some classical function class, can one have modified rates for stability bounds depending on $F$ and $\mathcal{F}$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None noted
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for considering that "this paper works with a more practical version of random forests." We also believe the most important point of our results is that they apply to the practical version of random forests, and are thus strongly relevant to applied machine learning. We theoretically prove in this work that random forests can be used to provide justified prediction intervals at almost no extra computational cost, which is appealing particularly for tasks where uncertainty quantification is desired and the computing resource is limited. Below are point-by-point responses to the reviewer's "Weaknesses" and "Questions" comments.
1. "It was not clear whether the stability bounds were optimal or could be improved." We agree with the reviewer on this point. On one hand, our bound is valid for a training set that contains at least two data points, and it seems that there is not much room for further improvement in the most general situation. On the other hand, for a dataset of typical size in practice, and for non-pathological data generating distributions, our numerical experiment (Sec. 3.4 of the manuscript) seems to suggest that the current bound could be improved. One of our research goals is to find a more informative bound that holds at least for TYPICAL cases, while the current bound we use holds for the WORST case. By providing the first stability result for random forests, we hope our work can draw the attention of more researchers to work towards a more satisfactory bound.
2. "The presentation usually benefits from working under a single set of assumptions instead of increasingly more assumptions (however this is a minor point)." We agree that stating all assumptions first makes the presentation of results clearer. Our structure basically follows previous works such as Ref. [17]. We start with the stability result (Theorem 7) and the coverage lower bound (Theorem 8), which hold under minimal assumptions and already carry sufficient useful information to guide machine learning practice. In many works on distribution-free prediction, such as Refs. [3], [15], the aim is to establish the lower bound. Statisticians such as the authors of Ref. [17] also pay much attention to the upper bound, which cannot be obtained without more assumptions. Such assumptions are mild, and can usually be satisfied by typical data distributions. Hence we also include this in our work (Theorem 9). Lastly, in order to compare with a previous asymptotic result on the coverage of random-forest prediction intervals in Ref. [32], we also derive a similar one (Theorem 10). This part is somewhat independent from the stability result and non-asymptotic lower and upper bounds. The assumptions made in Theorem 10 are also less intuitive than those made in Theorems 7-9. To highlight the most important contributions, we decided to state the assumptions in the present way. We thank the reviewer and hope our structure can be considered as reasonable.
3. If the true model is known, "can one have modified rates for stability bounds?" Thanks for raising this insightful question. While our theory provides a sufficient condition for stability of random forests, we do not expect its optimality in all cases. If we do know the true model, then we might perform some other analysis by fully taking advantage of the information about the data distribution and the function class. As long as one is able to come up with a more informative bound for $|\mathsf{rf}-\mathsf{rf}^{\backslash i}|$, the convergence rate can be modified. In the present work, our focus is the model-agnostic case. Again, our work serves as an important initiation point towards research in random forest stability, and we leave the quantitative improvement of the present work as future study.
We hope we have fully addressed the reviewer's comments, and have made clearer the qualitative importance and practical relevance of our work. | Summary: The paper presents a new and strong set of results on the stability of (the greedy version of) random forests. Theoretical (resp. numerical) evidence is provided to support stability under light-tailed (resp. heavy-tailed) assumptions on the marginal distribution of the squared response. New finite-sample upper and lower bounds are provided for prediction intervals constructed from the OOB errors of random forests. The paper can be regarded as a demonstrative work that justifies the merit of random forests for both point and interval prediction.
Strengths: The style of results and overall content of the paper are very appealing. I like the way in which the stability and prediction interval results were informally stated first and then rigorously discussed later. A clear review of algorithmic stability concepts helped me understand the proof ideas.
Weaknesses: -- I think the writing can be slightly improved by articulating the use of absolute stability results for bagged algorithms.
-- The transition from derandomized version of RF to finite-B case through the route of conditional stability analysis is a bit abrupt and needs more explanation, particularly with regard to concentration of measure.
-- The limitation/extension of the theory to the heavy-tailed case is interesting, but some comments about the key bottlenecks in the proof technique that would need to be overcome to achieve such a generalization would make the contribution stronger. Also, where does the theory break? The experimental results are not very clear in this regard.
-- On the prediction interval part, Table 1 can be enhanced and repositioned to include computational advantages as well. Finally, the discussion around “jackknife-after bootstrap-with-stability (JaBS)” is a bit hard to follow since the discussion on J+aB and J+aBS is intertwined with the intuition on how subsequent results in the paper build on stability results – it is desirable to streamline this discussion a bit.
-- please consider citing and discussing relevant work in OR on optimal classification/prescriptive trees by Bertsimas and co-authors, as well as their work on Stable Classification. Your approach and focus are different, but given the stated goal of putting the stability of random forests on a stronger footing, it may be worthwhile to make a connection.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please see my suggestions above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Everything on this front seems reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We gratefully thank the reviewer for the positive comments on our work, and the suggestion of a “Strong Accept.” The reviewer states that “the paper can be regarded as a demonstrative work that justifies the merit of random forests for both point and interval prediction.” This statement contains exactly the message we want to convey to the machine learning community: for typical tabular datasets, random forests can provide justified interval prediction at almost no extra computational cost, in addition to good point prediction. We believe our theoretical results in this work can have a strong impact on machine learning practice in various fields. In the revised manuscript, we will fully address the points raised by the reviewer to further improve our work. Below are our point-by-point responses to the reviewer’s “Weaknesses” comments.
1. “Articulating the use of absolute stability results for bagged algorithms.” This kind of stability is addressed in Ref. [25] and serves as an important foundation of our work. In this case, stability can be established with the only requirement that the base learner outputs bounded predicted values, and the data distribution is irrelevant. This can be appealing in certain tasks such as 2-class classification, where $Y$ itself is naturally bounded. One can use logistic regression, the tree predictor, or any other learning algorithm with bounded output as the base learner, and by aggregating results obtained on bootstrapped samples, a stable predictor can be achieved, in the sense that arbitrarily removing a single training data point will not much influence its prediction on a future data point. (It is worth stressing that in general regression tasks, $Y$ is unbounded, so requiring base learners to output unconditionally bounded predictive values can be a problem. However, inspired by the underlying idea of absolute stability, we in this work address this problem by first considering conditional stability and then averaging over the data distribution.)
2. "The transition from derandomized version of RF to finite-B case...needs more explanation." The derandomized version can be effectively seen as the $B=\infty$ case. That is, each bootstrap sample is drawn with equal probability an infinite number of times, resulting in the population mean $\mathsf{rf}(x)$ of tree predictors for any future $x$. In practice, $B$ is finite, and each bootstrap sample is drawn with equal probability a finite number of times, resulting in the sample mean $\mathsf{RF}(x)$ of tree predictors. The difference between $\mathsf{rf}(x)$ and $\mathsf{RF}(x)$, when conditioned on training data, can be quantified by Hoeffding's inequality, a standard concentration inequality. This approach was also used in previous works, c.f. Refs. [15], [25].
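As an illustrative sketch of this step (our own notation and constants, not taken from the manuscript): conditional on the training data, the $B$ tree predictors are i.i.d., and if their outputs at $x$ lie in an interval of length $R$, Hoeffding's inequality gives
$$\Pr\left(\left|\mathsf{RF}(x)-\mathsf{rf}(x)\right|\geq t \;\middle|\; \text{training data}\right) \leq 2\exp\left(-\frac{2Bt^{2}}{R^{2}}\right),$$
i.e., the finite-$B$ sample mean concentrates around its derandomized ($B=\infty$) population mean at rate $O(1/\sqrt{B})$.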
3. The issue of heavy-tailed distributions and bottlenecks of our theory. Our proof mainly relies on the upper bound $2Z_{(n)}$ of the difference between $\mathsf{rf}$ and $\mathsf{rf}^{\backslash i}$ when conditioned on training data. It is a valid bound for many variants of random forests and for any training set with size no less than two. However, it is unclear whether this bound is tight in general. Developing our results on a possibly crude bound leads to the conclusion that the random forest is stable when $Y$ is not heavy-tailed. Please note that this is a SUFFICIENT, rather than necessary, condition for random forest stability. Numerically, we found that even for heavy-tailed distributions, the stability seemed to persist. This indicates that there might exist a more informative bound, at least for non-pathological data generating distributions and for datasets of typical size. Once such an informative bound is established, our theory can be modified to more thoroughly explain the experimental result. Our work serves as an important initiation point for research in this direction.
4. "Table 1 can be enhanced and repositioned to include computational advantages." We will revise the manuscript accordingly. Also, with an extra one page for accepted papers, we expect to be able to provide the information of guaranteed coverages of those algorithms as well.
5. The discussion around JaBS is hard to follow. We are sorry for the lack of clarity; the length limit forced us to write it this way. We will streamline this discussion in the revised manuscript as suggested.
6. Bertsimas and co-authors' work. Thanks for introducing this line of work to us. It is not only relevant, but also inspiring. For example, combining the JMLR paper "Stable Classification" and the "stability by bagging" idea seems to immediately suggest the method of "importance aggregating." That is, we aggregate the best-performing base learners, trained on "good" bootstrap samples, with larger weights. This can be a potential topic. We will discuss and cite the relevant papers.
We hope we have fully addressed the points raised by the reviewer, and we appreciate the reviewer's valuable suggestions.
---
Rebuttal Comment 1.1:
Comment: Thanks for responding to my review. No more clarifications sought from my side. | Rebuttal 1:
Rebuttal: We want to thank all reviewers for their time and comments.
We are encouraged that three of them give positive evaluations to our work and suggest accepting our manuscript with ratings 6, 7, and 8, respectively. All three reviewers agree that our theoretical work is relevant for applied machine learning. Actually, this is the most important message we want to convey in this work. We prove that, under mild conditions, the practical version of random forests can be used to construct justified prediction intervals at almost no extra computational cost. Given the well-known fact that random forests have good performance on point prediction, our result "can be regarded as a demonstrative work that justifies the merit of random forests for both point and interval prediction," as per Reviewer 1UHc. We appreciate this reviewer's summary of our work, and we are happy to share it with more machine learning researchers in various fields. Following Reviewer BF9X’s suggestions, we will further clarify our problem, conclusions, the definitions of some terms, and the novelty and significance. We will try our best to make the revised manuscript more accessible to a broader readership.
Each reviewer's comments are addressed separately below. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning DAGs from Data with Few Root Causes | Accept (poster) | Summary: The paper studies a new causal discovery method, which assumes that DAG data is produced by a few data-generating events whose effect percolates through the DAG. They propose a simple but effective method to learn the true DAG based on the few-root-causes assumption. The proposed method outperforms baselines in various settings.
Strengths: 1. The few roots assumption is reasonable. And the paper motivates it well.
2. The proposed solution is simple and effective.
3. The paper conducts experiments on both synthetic and real-world datasets, indicating the effectiveness of the proposed method.
Weaknesses: 1. It is not clear why the objective function Eq.(10) contains noise. A clearer explanation and derivation is needed here.
2. I wonder whether the proposed method could find the root nodes at the same time rather than just learning the DAG.
3. The proposed method could not achieve the best results on real-world datasets. Hence, I doubt that the few-root-causes assumption is satisfied in real-world scenarios. BTW, it is better to say the network is a protein network rather than a gene network.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please reply to the questions in the weaknesses. I am willing to raise the score if all the concerns are solved.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to reviewer Aevs
## Weaknesses
**Optimization robustness to noise.** The optimization problem doesn't contain noise explicitly. It is the convex $L^1$ relaxation of the noise-free version of the optimization problem. Doing this relaxation allows some robustness to (low magnitude) noise as we later confirm in our experiments (Table 1, row 7).
The objective in Eq. (10) only contains noise implicitly, after rewriting it with Eq. (4).
If we have found the true adjacency matrix $\mathbf{A}$, the objective becomes
$$\left\|\mathbf{X}\left(\mathbf{I} + \overline{\mathbf{A}}\right)^{-1}\right\|_1 =\left\|\mathbf{C} + \mathbf{N}_c + \mathbf{N}_x\left(\mathbf{I} + \overline{\mathbf{A}}\right)^{-1}\right\|_1 = \left\|\mathbf{C} + \mathbf{N}_c + \mathbf{N}_x\left(\mathbf{I} -\mathbf{A}\right)\right\|_1$$
This means that we approximate the root causes up to noise (see also the next answer).
We will explain this better.
**Learning the root nodes.** Yes, we can do this in the following way. If we recover the true adjacency matrix $\mathbf{A}$ via optimization Eq. (10),
we may compute an approximation $\widehat{\mathbf{C}}$ of the root causes $\mathbf{C}$ up to noise, by solving Eq. (4): $$\widehat{\mathbf{C}}= \mathbf{C} + \mathbf{N}_c + \mathbf{N}_x\left(\mathbf{I} -\mathbf{A}\right) = \mathbf{X}\left(\mathbf{I} + \overline{\mathbf{A}}\right)^{-1}.$$
This obviously requires a very good (weighted) estimate of the original adjacency matrix. We evaluate the top-performing methods on the recovery of the root causes (and the associated values) with an additional experiment in Fig. 3 of the attached pdf.
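A minimal noise-free numerical sketch of this recovery (our own illustration with a hypothetical random 5-node DAG, not code from the paper), using the identity $\left(\mathbf{I}+\overline{\mathbf{A}}\right)^{-1} = \mathbf{I}-\mathbf{A}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200  # nodes, samples

# Hypothetical weighted DAG: strictly upper-triangular adjacency matrix A.
A = np.triu(rng.uniform(0.5, 1.5, (d, d)), k=1) * (rng.random((d, d)) < 0.4)

# Sparse root causes C: Bernoulli support with uniform nonzero values.
C = rng.uniform(0.5, 1.0, (n, d)) * (rng.random((n, d)) < 0.15)

# Linear SEM (rows are samples): X = X A + C  <=>  X = C (I - A)^{-1}.
I = np.eye(d)
X = C @ np.linalg.inv(I - A)

# With the true weighted A, the root causes are recovered exactly
# in the noise-free case: C_hat = X (I - A) = C.
C_hat = X @ (I - A)
assert np.allclose(C_hat, C)
```

With the two noise terms of Eq. (6) added, $\widehat{\mathbf{C}}$ would instead equal $\mathbf{C}+\mathbf{N}_c+\mathbf{N}_x\left(\mathbf{I}-\mathbf{A}\right)$, i.e., the root causes up to low-magnitude noise.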
**Real-world dataset performance.** In Fig. 4 of the attached pdf we include an experiment that illustrates the sparsity in the root causes for the particular dataset [Sachs et. al., 2005]. Please see our general reply for further explanation and also our success in a causal discovery competition with real-world data.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks to the authors for the rebuttal. The additional results have resolved my concerns. I hope the authors can include the new results in the final version. I would like to raise my score to reflect the changes. | Summary: This paper considers a new setting of the linear DAG learning problem. Based on a linear transform of the linear SEM, the authors propose to study a new setting where there are few "root causes", with potential measurement noise in the data. Identifiability is proved and the true DAG is shown to be the global minimizer of the L0-norm of the vector of "root causes", under a specific distribution on the "root cause" variables.
Strengths: - a new setting for the linear DAG learning problem
- useful identification result (Thm. 3.2) with a complete proof
Weaknesses: - the new setting and its motivating example are not sufficiently convincing.
- authors only consider specific distribution on the "root causes" variables, making theoretic result somewhat limited
- some results are trivial from the literature (e.g., Thm 2.1)
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - My first concern is about the new setting of learning linear DAGs; it is not clear whether the new setting is indeed meaningful in practice. In the pollution model example, it is stated that "the relevant DAG data is triggered by sparse events on the input size and not by random noise ", and "We assume a DAG describing a river network. The acyclicity is guaranteed since flows only occur downstream. .. We assume that the cities can pollute the rivers." In this example, why do we need to learn DAGs? The graph structure can be more accurately obtained by getting the information of flows. As such, I suggest authors give more practical examples in the context of DAG learning, to make the new setting indeed meaningful.
- root causes: in the DAG learning literature, "root causes" generally refer to the source nodes of DAGs. Not sure if it is suitable to use a (somewhat) conventional name to refer to something new in the same context.
* Theorem 2.1 is not new and may be not stated as a theorem.
* Regarding Thm 3.1: similarly, the result simply follows from the LiNGAM result by assuming a specific distribution on the "root causes", so maybe consider putting it as a lemma or proposition. Besides, in the experiments in the supplementary material, I can see LiNGAM failed. Can you explain why? After all, the linear SEM falls exactly into the setting of LiNGAM if there is no measurement noise.
* after Eq. 8, "Among all possible DAG matrices, the solution of the optimization problem (8) is the one that minimizes the number of the root causes X": can you give more details about this claim?
* This may be a bit picky, but only sparse graphs (with edge/node=2 and 3) are considered. Please try other degrees of graphs. (But this is not very important and may be added after the rebuttal.)
* please use \cite, \citet properly; e.g., line 131 line 181-182
Overall, I like the new setting of learning linear DAGs, but every new setting should be validated with more examples/details. I look forward to author response.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Authors discussed a limitation that the proposed method only works for few root causes in the paper. To me, another important limitation is the specific distribution assumption on the "root causes", as the proof heavily depends on this assumption.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to reviewer LucQ
## Weaknesses
**Motivation for root causes.** Please see our general reply for a better motivation and also our success in a causal discovery competition with real-world data.
**Assumptions on the root causes.** Yes, within the scope of sparse root causes we only consider the special case that the support of the root causes is a multivariate Bernoulli variable (since the values are either zero or nonzero) and we should mention it in the limitations. We considered it a reasonable assumption to execute our ideas. Identifiability will hold for larger classes of distributions but we did not run experiments.
**Theorem 2.1.** We will call it a lemma. It is indeed an easy consequence of basic linear algebra (and thus not novel of course) but usually not explicitly stated in papers on linear SEMs. We do so here since this input/output view of a linear SEM is fundamental to our work.
## Questions
**Meaningfulness in practice.** Please see our general reply for a better motivation and also our success in a causal discovery competition with real-world data.
**Term: Root causes.** Source nodes are sometimes called root nodes, but the term root causes seems not common. In any event, we will better clarify its use in our work to avoid ambiguity.
**Theorem 2.1.** See reply above.
**Theorem 3.1 and LiNGAM's performance.** Yes, Theorem 3.1. is a consequence of the input being non-Gaussian, which we also state. Better to call it a corollary to the LiNGAM result than a lemma.
The subpar performance of LiNGAM is an interesting question, and a possible explanation can be found in [Shimizu et al., 2006], Section 5. Theoretically, the LiNGAM algorithm is guaranteed to find the true DAG. However, in practice, there exist small estimation errors. Moreover, exhaustive search over all possible permutations to compute the true causal order is infeasible for a large number of nodes. Thus, it turns out that the algorithm approximates with zeros the smallest elements and then computes the corresponding permutation to make it upper triangular. This makes the algorithm approximate and it is unclear whether it is guaranteed to find the true matrix in practice. For example, as we see in our experiments, it can fail.
**Optimization Objective, after Eq. (8).**
For a DAG matrix $\widehat{\mathbf{A}}$ the quantity $\mathbf{X}\left(\mathbf{I} + \overline{\widehat{\mathbf{A}}}\right)^{-1}$ is equal to the root causes $\widehat{\mathbf{C}}$ that would generate the data $\mathbf{X}$ via equation (7). Therefore, the optimization objective in (8) minimizes the overall number of root causes (the number of nonzero entries of $\widehat{\mathbf{C}}$) for the entire data matrix $\mathbf{X}$. We will write this better.
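A tiny deterministic example of this claim (our own sketch with a hypothetical 3-node DAG, not from the paper): the residual $\mathbf{X}\left(\mathbf{I}-\mathbf{A}\right)$ equals the sparse root causes for the true $\mathbf{A}$, while a wrong candidate such as the empty DAG leaves a denser residual, so minimizing the number of nonzeros favors the true DAG:

```python
import numpy as np

# Hypothetical weighted DAG on 3 nodes with edges 0->1, 0->2, 1->2.
A = np.array([[0.0, 1.0, 0.5],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])
I = np.eye(3)

# Two samples, each triggered by a single sparse root cause.
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
X = C @ np.linalg.inv(I - A)  # root causes percolate downstream

def nnz(M, tol=1e-9):
    return int(np.sum(np.abs(M) > tol))

assert nnz(X @ (I - A)) == 2  # true A: residual equals C (2 nonzeros)
assert nnz(X) == 5            # empty DAG (A = 0): residual is X itself, denser
```

In the noisy case the residual for the true $\mathbf{A}$ is only approximately sparse, which is why a convex $L^1$ relaxation is used instead of counting nonzeros directly.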
**Edge density.** We chose to follow the experimental settings of prior work, which mostly generates sparse DAGs. For completeness, we conducted an experiment with varying average degree and included it in the attached pdf, Fig. 2. For degree up to roughly half the maximal possible degree, we still perform best.
**Citations.** Thanks, we will fix it.
## Limitations
**Root causes distribution assumption.** Yes, we will mention in the limitations that we only considered multivariate Bernoulli. Identifiability will generalize to other distributions (due to the non-Gaussianity needed in Theorem 3.1) but we did not run any experiments with any. We did mention in the limitations that fixed support is not supported (and it seems also not identifiable).
---
Rebuttal Comment 1.1:
Title: Thanks for response
Comment: My major concern has been mostly resolved. I believe that authors can make the setting more practically convincing in future revision.
An additional suggestion is to add a discussion regarding LiNGAM's performance, as the theory part depends on LiNGAM's. From my point of view, it would be even better to have a more thorough investigation, e.g., some extra experiments.
I will increase my score accordingly. | Summary: This paper considers learning of linear SEMs (the weight matrix) under a data generation process that differs from the common formulation. It is assumed that each sample is generated from only a few non-zero noise variables, where the set of noise variables is stochastic. The main theoretical result is identifiability of the weight matrix via an L0-norm objective. A relaxation of the objective with an L1-norm loss is used for extensive synthetic experiments and shown to be effective for learning the weight matrix under the proposed data generation mechanism.
Strengths: - Sparse root causes are an interesting concept to explore.
- Experiments on synthetic data demonstrate strong performance. Specifically, the proposed algorithm is fast and seems to scale up well, and when the assumptions are violated slightly, the algorithm still yields reasonable results.
Weaknesses:
- While the sparse root causes assumption is interesting, it is not well-motivated. For the pollution example, if I understand correctly, the top left and bottom left nodes do not really play “causal” roles in the sense that they are deterministic mediators in the system, and the whole causal system can be represented without using those nodes. In this sense, “few root nodes” become the effective causal nodes. Am I missing something? The numerical evaluations using real data also do not provide much help for motivation.
- NNZ is not a strong metric when SHD is already provided, and the only edge of the proposed algorithm in the real data experiments is this metric.
- Presentation can be improved. For instance, Theorem 2.1 formulation of the linear SEM is trivial. Similarly, having $N_c$ and $N_x$ separately is superfluous; having a single small variance, not necessarily isotropic noise suffices for the description.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
- The authors can elaborate on the Weaknesses item 1.
- An additional question for the presentation. In L103-105, “The high-level intuition is that it can be reasonable to assume that, with the viewpoint of (3), the relevant DAG data is triggered by sparse events on the input size and not by random noise.”
At first, I thought that you only consider a fixed subset of the nodes that have “random noise” with large magnitudes at each sample. If that was the case, the data is not i.i.d, the events are sparse on the input size, but it doesn’t necessarily occur on the same set of variables and I was puzzled with the importance of the proposed mechanism. Then I realized in Theorem 3.1 that at every sample, the non-zero noise variables are chosen randomly and it made sense (please correct me if I misunderstood anything). For a paper that proposes a new data generation model, the presentation should have been cleaner.
- Theorem 3.2: it assumes a large enough but finite number of samples, but gives no sample complexity result. This is a rather weird statement.
- Note: figure 2 has Möbius as the algo name. Perhaps it’s forgotten in the main text.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations of this work are discussed very clearly in Section 6. I thank the authors for that paragraph.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to reviewer STdi
## Weaknesses
**Motivation for root causes.** Please see our general reply for a better motivation and also our success in a causal discovery competition with real-world data.
**Root causes support.** Note that we assume varying support of the root causes in the dataset $\mathbf{X}$. The example in Fig. 1 shows only one data vector with a specific root cause support of size 2. Because of the varying support, the system cannot be simplified and all nodes are significant, possibly in distinct samples (data vectors).
**NNZ metric.** We will adjust the text to only refer to SHD, SID as indicators for the performance of our algorithm. However, NNZ is valuable as an indicator that the output of the algorithms is not trivial. For example, the empty DAG would result in an SHD of 17 in Table 4 (since the true DAG only has 17 edges). We will clarify this in the text.
**Anisotropic noise.** Yes, one can fuse the noise variables $\mathbf{N}_c$ and $\mathbf{N}_x$ into a single (anisotropic) one:
$$\mathbf{N} =\mathbf{N}_c + \mathbf{N}_x\left(\mathbf{I} - \mathbf{A}\right).$$
However, we prefer to keep them separate since they have distinct and intuitive meanings in a real-world setting: $\mathbf{N}_c$ captures approximately (not exact) sparse root causes, $\mathbf{N}_x$ is the measurement noise always present when obtaining real data $\mathbf{X}$. Also, this allows us to manipulate them separately later in our experiments (Table 1, rows 9, 10).
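For completeness, the fusion follows in one line (our reading of the model, assuming the data satisfies $\mathbf{X} = \left(\mathbf{C} + \mathbf{N}_c\right)\left(\mathbf{I}-\mathbf{A}\right)^{-1} + \mathbf{N}_x$):
$$\mathbf{X} = \left(\mathbf{C} + \mathbf{N}_c\right)\left(\mathbf{I}-\mathbf{A}\right)^{-1} + \mathbf{N}_x\left(\mathbf{I}-\mathbf{A}\right)\left(\mathbf{I}-\mathbf{A}\right)^{-1} = \left(\mathbf{C} + \mathbf{N}\right)\left(\mathbf{I}-\mathbf{A}\right)^{-1}.$$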
## Questions
**Fixed support of root causes.** Yes, we have to clarify earlier that the root causes have varying support. We only consider fixed support in one of the synthetic experiments (Table 1, row 13).
**Sample complexity.** Indeed not well written; we will rewrite. In the supplementary material we provide a bound on the number of samples in the noise-free case. Empirically, our algorithm works with far fewer samples and in the presence of noise, which shows that this bound is very loose.
**Möbius.** It is a mistake, we'll fix.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. My assessment of the paper remains the same. | Summary: This paper presents a new formulation of the linear SEM by specifying a structure on the noise variables, which seems to impose zero-inflated distributions to achieve the "few roots" modelling goal. Identifiability is given and a guarantee for the $L^0$ minimization estimator is provided for a special noise-free case. The minimization problem is further formulated as a continuous optimization and numeric experiments are conducted.
Strengths: - The idea of the new formulation is interesting, based on the illustrative river pollution example.
- The experiments consider many different setups, also including real datasets. The proposed algorithm shows comparable performance in the simulation study, though not by much.
Weaknesses: - The motivation of the proposed formulation needs more elaboration, see questions. And the current definitions of "few roots" and "negaligible noise" in (6) are not formal or clear.
- Thm 3.2 only works for noise-free setting, thus there is no guarantee for consistency of $L^0$ minimization estimator for general model (4) beyond noise-free.
- Neither the related work nor the experiments discuss or compare with constraint-based methods, even the basic PC algorithm. Is it because we lose the Markov property and faithfulness in this new formulation?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Thm 3.1 is proved by transforming the proposed model into a non-Gaussian DAG model. Just want to clarify: is the proposed formulation covered by the original DAG model? If so, are all other DAG learning approaches applicable? In that case, how do the authors justify the superiority of the new formulation beyond the empirical experiments?
- In Figure 2(a) and (b), the SHD and SID for mobius are larger than others at the beginning but drop in the end, any explanation on that?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations are discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to reviewer NZKD
## Strengths
**Improvement against other methods.** Our algorithm offers significant improvements over the baselines in most scenarios of the simulation experiment (Table 1 and Fig. 2) and especially when the number of nodes in the graph is scaled up (Table 3).
## Weaknesses
**Motivation of our setting.** Please read our general reply for a better explanation of the motivation and also our success in a causal discovery competition on real-world data.
**Assumption on the noise and root causes.** Our assumptions are formally defined in Eq. (6). In words, the root causes are approximately (as captured by $\mathbf{N}_c$) sparse and the data is (as always) subject to measurement noise $\left(\mathbf{N}_x\right)$, where both noise terms are of low magnitude as described in (6). In the experiments we use concrete values for the error energies. We will clarify.
**Theorem 3.2 and the presence of noise.** Indeed, we have included this statement as limitation (b) in section 6.
**Constraint-based methods.** We did not include constraint-based methods since prior work already has shown to outperform them. So we focused on the best baselines. However, we include a comparison against the PC algorithm in the attached pdf, Fig. 1. The PC algorithm performs inferior.
## Questions
**Superiority of our formulation.** Yes, our model still assumes a linear SEM via Eq. (5). Thus, it differs only in the assumption on the distribution and structure of what was previously viewed as noise, which we now view as the input provided by each node to produce the DAG data as output (Eq. (6)). In this new formulation we assume (approximately) sparse input, and the optimization problem we solve captures that. This is the proposed input-output viewpoint of the linear SEM, and it motivates the assumption of having sparse root causes.
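To make this input-output viewpoint concrete, here is a minimal NumPy sketch (illustrative only, not the authors' code; the DAG, weights, and noise magnitudes are arbitrary stand-ins for Eqs. (5)-(6)): data is produced by pushing approximately sparse root causes through a linear SEM and adding measurement noise.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 5  # nodes, samples
# Strictly upper-triangular weighted adjacency => a DAG in topological order
A = np.triu(rng.uniform(0.5, 1.5, (d, d)), k=1) * (rng.random((d, d)) < 0.3)

# Approximately sparse root causes C plus a small perturbation N_c
C = (rng.random((n, d)) < 0.1) * rng.uniform(1.0, 2.0, (n, d))
N_c = 0.01 * rng.standard_normal((n, d))

# Input-output view of the linear SEM: X(I - A) = C + N_c, plus noise N_x
X = (C + N_c) @ np.linalg.inv(np.eye(d) - A)
X = X + 0.01 * rng.standard_normal((n, d))  # measurement noise N_x
```

Recovering $A$ from $X$ alone is then the structure-learning problem under discussion.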
**Low data regime.** Yes, it appears that in the low-data regime our method performs worse. This is likely due to the sparsity of the root causes (which have varying support by assumption): to identify all edges, the supports of the root causes across the data vectors may need to cover a significant subset of the nodes.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response, which addressed some of my concerns. I have increased the score. I hope the authors make the corresponding changes in the revision; in particular, make Eq. (6) more formal (the definition seems to be placed on the realized data instead of the data-generating process, which is unconventional), and "significantly larger" is not rigorous mathematical language. | Rebuttal 1:
Rebuttal: # General Comments
We thank all the reviewers for their kind reviews and their effort and interest in understanding and commenting on our work. We will incorporate these comments in an improved revision. Here, we would like to address two main points that arose from the comments.
**Motivation for few root causes**
We view the pollution example as a metaphor that can be carried over to various DAG scenarios, as roughly sketched next.
For example, as pointed out by reviewer LucQ, the data could measure (amount of) water flow in which case the root causes would capture cities with major input (e.g., through rainfalls).
In gene networks that measure gene expression, few root causes would mean that few genes are activated in a considered dataset.
In a citation network where one measures the impact of keywords/ideas, few root causes would correspond to the few origins of them.
[Peters et al., 2017] mention on pp. 19-21 the *Principle of independent mechanisms*, which in essence assumes that any causal data generation relies on independent mechanisms that turn input into output. This corresponds to our viewpoint on the linear SEM with the independent root causes as input.
As a thought: with the input/output view of linear SEMs in Eq. (3) one can question why all real world DAG data should be generated from i.i.d. noise $\mathbf{N}$ as input.
**Application in real-world data**
Our method doesn't achieve the best performance on the dataset from [Sachs et al., 2005], but is reasonably competitive, which is all we wanted to show.
*More importantly*, after the NeurIPS submission our method was among the three winning entries in a causal discovery competition, run by a major pharma company, with the results presented in a (non-archival) report and at a major ML conference (no specifics due to double-blind review). One reviewer commented "... the few root causes assumption may have biological relevance worthy of further investigation." Our method was the only entry that offered some theoretical guarantees and performed well even with a linearity assumption.
Also, motivated by a comment of Reviewer Aevs, we did an experiment to estimate the root causes of the dataset from [Sachs et al., 2005]. In the pdf, Fig. 4, we show the result. The dataset comes with an unweighted adjacency matrix; we chose weights to enforce sparsity on the root causes. The sparsity roughly holds (Fig. 4, right). Interestingly (Fig. 4, left), the root causes tend to have fixed support, which could explain why we are unable to perfectly recover the DAG.
Pdf: /pdf/5a1367b15d5fce5026c9600e8d9cf1b9351d63c6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CrossGNN: Confronting Noisy Multivariate Time Series Via Cross Interaction Refinement | Accept (poster) | Summary: This work first analyzes real-world datasets for multivariate time series forecasting and identifies two problems that are not well handled by previous works: 1) unexpected noise; 2) heterogeneity between variables. A GNN-based model, named CrossGNN, is proposed to fill the gap. CrossGNN consists of three components: 1) Adaptive Multi-Scale Identifier identifies potential periods with FFT and aggregates time series at different scales to construct a multi-scale view of the data; 2) Cross-Scale GNN constructs a graph among scales and uses GNN to capture cross-scale dependency; 3) Cross-Variable GNN captures homogeneous and heterogeneous dependency among variables. Correlation graphs used in Cross-Scale and Cross-Variable GNNs are restricted to be sparse, so the complexity is linear in the input length. Experiments on 8 real-world datasets demonstrate the effectiveness of the proposed model.
Strengths: - This work is well-structured and easy to follow.
- The studied problem, noise in datasets, is an important and practical problem for MTS forecasting.
- Three components in CrossGNN are well-motivated and reasonable. The linear complexity is also an advantage.
- Robustness analysis of noise in section 4.3 is interesting.
Weaknesses: - My main concern is that graphs used in Cross-Scale and Cross-Variable GNNs are static for each dataset. These graphs are constructed by shared learnable vectors with filtering, so they are static and the same for different inputs. This leads to 1) the dependency structure of time steps being fully determined by the positions in the series; 2) the dependency structure of variables being the same for different inputs in a dataset.
- Some recent works (e.g. PatchTST[1], Crossformer[2]) that explicitly model the cross-variable dependency should be compared in the main experiment.
[1]Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In International Conference on Learning Representations, 2023.
[2]Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In International Conference on Learning Representations, 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In Equation (1), amplitudes are averaged over variables. Is the average operation reasonable? What if different variables have different periodic patterns?
- How is the output of AMSI (with shape $L' \times D$) embedded into latent vectors with shape $L' \times D \times C$?
- Settings of the ablation study should be described in detail, perhaps in the appendix. What does "divides the scales directly by fixed length" mean? Does C-Hete only remove the heterogeneous connections? If so, an ablation study that removes both homogeneous and heterogeneous connections should be conducted.
- It is better to compare the efficiency with baselines such as DLinear, Pyraformer, and TimesNet in Figure 6, as it is obvious that the proposed model is more efficient than the full graph. Moreover, the complexity of Pyraformer is $O(L)$, not $O(L \log L)$.
- See my other questions in the weakness part.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer LSa5,
Thank you for providing positive feedback on our manuscript and encouraging us to make further improvements. We have now addressed your concerns by supplementing both experimental studies and concise technical descriptions.
**W1: Fixed Graph Structure for Different Inputs.**
Based on your suggestions, we have analyzed the effects of a fixed dependency structure in CrossGNN from both the temporal and the variable dimension.
- **Fixed Graph Structure in Temporal Dimension:**
In a stable physical system, the relationship between time points is usually consistent, exhibiting a fixed change pattern over time. Such consistency can be disturbed by noise, making the graph signal time-varying and less regular. Therefore, we aim to capture the general, consistent relationship via the GNN and learn a fixed temporal graph structure to avoid the impact of such noise.
- **Fixed Graph Structure in Variable Dimension:**
(1) A fixed dependency structure can effectively represent variable relationships across diverse inputs (remaining robust under different input disturbances); such effectiveness has been validated in MTGNN. (2) Our Cross-Variable GNN further decomposes variable-level relationships into positive and negative ones, enabling a more comprehensive and effective description of dependencies. (3) A fixed graph structure reduces forward computation time.
We aim to learn a stable relationship between different time steps and variables within a fixed physical system (each dataset), reducing noisy effects on the dependency structure. The concern you raised is exactly the one we considered in our work, and the experimental results of CrossGNN indicate that a fixed correlation matrix tends to be more effective than a varying one. We appreciate your thoughtful analysis and raising this concern; we will place the above analysis at the conclusion of our manuscript.
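As an illustration of the kind of fixed, input-independent variable graph described above (a sketch in the spirit of MTGNN-style learnable node embeddings; the exact CrossGNN construction and the names below are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, emb_dim = 8, 16
# Shared learnable node embeddings (random stand-ins for trained parameters)
E1 = rng.standard_normal((n_vars, emb_dim))
E2 = rng.standard_normal((n_vars, emb_dim))

scores = np.tanh(E1 @ E2.T)       # signed variable-relation scores
A_pos = np.maximum(scores, 0.0)   # positive (homogeneous) correlation graph
A_neg = np.maximum(-scores, 0.0)  # negative (heterogeneous) correlation graph
# Both graphs are fixed per dataset: they do not depend on the input window
```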
**W2: Additional Comparative Experiments**
Thank you for pointing out the experimental limitations. We worked day and night, conducting numerous experiments, and have documented the complete results in the **Table 1 and Table 2 of our submitted one-page PDF**.
Experiments demonstrate that our method continues to exhibit excellent predictive capabilities compared to PatchTST and Crossformer. We appreciate your assistance and suggestions in enhancing the persuasiveness of our experiments, and we will include the complete results in the final version of our manuscript.
**Q1: Averaged amplitudes.**
There are two common strategies to achieve multi-scale time-series.
1. **As you mentioned**, each variable derives a variable-wise period based on its own frequency domain amplitudes. Then, each variable obtains distinct period length and subsequently forms variable-wise multi-scale MTS.
2. **In CrossGNN**, we average the frequency domain amplitudes of each variable and uniformly assign the same periods to all the variables, resulting in our multi-scale MTS.
We take the latter strategy for following two reasons:
1. As most variables exhibit similar temporal patterns, the first strategy leads to less distinguishable amplitudes among variables.
2. Given comparable performance, the latter approach is more efficient in implementation, as it allows for parallel operations on all variables. We provide runtime and corresponding prediction metrics for the above two strategies in **Table 6 of the one-page PDF** for your reference. The results demonstrate that averaging the amplitudes in CrossGNN achieves comparable prediction performance while significantly reducing the running time.
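A minimal sketch of the second strategy (the function name and demo signal are ours; the real AMSI implementation may differ): rFFT amplitudes are averaged over variables and the top-$k$ frequency bins are mapped to period lengths.

```python
import numpy as np

def topk_periods(x, k=5):
    """x: (L, D) multivariate series. Average rFFT amplitudes over the
    variable dimension and map the k dominant frequencies to periods."""
    L = x.shape[0]
    amp = np.abs(np.fft.rfft(x, axis=0)).mean(axis=1)  # (L//2 + 1,)
    amp[0] = 0.0                    # ignore the DC component
    freqs = np.argsort(amp)[-k:]    # top-k frequency bins
    return sorted({L // int(f) for f in freqs})

rng = np.random.default_rng(0)
L, D = 96, 7
t = np.arange(L)
# A shared period of 24 steps plus per-variable noise
x = np.sin(2 * np.pi * t / 24)[:, None] + 0.1 * rng.standard_normal((L, D))
print(topk_periods(x, k=3))  # the injected period 24 is recovered
```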
**Q2: How is $L^{\prime}\times D \rightarrow \ L^{\prime}\times D\times C$.**
$C$ refers to the channel dimension. Here, we employ a dimension-expansion strategy (using an MLP) to create an embedding for each time step. This strategy is inherited from MTGNN and aims to enhance the local semantics at each time step, positively impacting subsequent cross-scale and cross-variable interactions. Thank you for your question; we will add this explanation in the final version.
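A toy sketch of this dimension expansion (a single linear map standing in for the MLP; names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
L_prime, D, C = 48, 7, 16

x = rng.standard_normal((L_prime, D))  # stand-in for the AMSI output
W = rng.standard_normal((1, C))        # stand-in for a trained 1 -> C linear map
b = rng.standard_normal(C)

# Expand each scalar entry into a C-dimensional channel embedding
h = x[..., None] @ W + b               # (L', D, C)
```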
**Q3: Ablation Study.**
Thank you for your valuable suggestion regarding ablations.
1. In AMSI, we learn $k$ potential periods of the sequences and perform average pooling based on these $k$ periods to obtain different scale representations. In **C-AMSI**, we remove the adaptive learning of period lengths and directly use $k$ fixed lengths (e.g., $1, 2, 3, ..., k$) for average pooling.
2. Thanks to the insights of the reviewers, we added a complete ablation study. **C-CV**: removing both homogeneous and heterogeneous connections. The results are provided in **Table 4 of the one-page PDF** and demonstrate that removing the entire Cross-Variable GNN module results in a significant performance degradation.
We appreciate your suggestions for expression clarity and additional ablation experiments. We will incorporate these revisions and additional results to our final version.
**Q4: Training Efficiency & Complexity of Pyraformer.**
1. We have re-evaluated the time complexity of Pyraformer and confirmed it to be $O(L)$. Thanks for pointing it out and we will make the corrections in the corresponding section of the article.
2. Thank you for helping us enhance the validity of our paper. We comprehensively compare the per-batch running time of our model with well-acknowledged deep forecasting models (TimesNet, DLinear, Pyraformer). The results are obtained with the official model configurations and are recorded in **Table 5 of the one-page PDF.** They demonstrate that the computation time of CrossGNN is significantly lower than that of all other models except DLinear.
Thank you again for your insightful suggestions and reviews.
Authors of Paper 3421
---
Rebuttal Comment 1.1:
Title: Response to Authors' Rebuttal
Comment: Thanks for your detailed response. I appreciate the thorough addressing of most of my concerns, including comparison with additional methods, averaged amplitudes, ablation study and efficiency analysis.
The remaining concern is about graph structure learning: I agree that the graph structure among variables can be fixed as the relation is stable. But the fixed graph for temporal dimension appears somewhat unrealistic, especially since the graph is fully determined by the absolute positions in the input sequence. For example, assume the model tells us that timestamps 2 and 4 are connected in sequence 1. And we get sequence 2 by sliding the window one element to the right (sliding window is a common operation for training time series model). In this scenario, the fixed graph suggests a connection between 2 and 4, which corresponds to the relationship between 3 and 5 in the original sequence. This simple example shows that a static graph structure in the temporal dimension lacks practicality.
Overall, the response makes this work more comprehensive and how to construct the graph for time series in temporal dimension is a complex and non-trivial open problem. I'll maintain my score of 6 for this work and am inclined to recommend that this paper be accepted.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer LSa5,
We would like to thank you for your encouraging follow-up.
- Actually, the idea of a fixed temporal dependency can reasonably represent the more stable relationships between temporal steps, thus filtering out noise and unstable components; the experimental results have also demonstrated this effectiveness. It is inspired by DLinear [1], which essentially learns a fixed temporal dependency structure through a simple linear layer.
- Regarding the issue you raised that the input sequential order impacts the dependencies: it can be addressed by a temporal positional embedding strategy, where we train a temporal positional embedding and obtain the adjacency matrix from this trainable positional embedding. When testing, we can look up the embedding dictionary to dynamically construct the temporal point-wise correlations for a more flexible testing process. We will incorporate this positional embedding idea into our manuscript, with both a detailed strategy description and experimental results, to further improve the scalability and flexibility of our solution.
[1] Are transformers effective for time series forecasting? AAAI,2023.
We sincerely thank you for your constructive advice and great help with our manuscript!
Authors of Paper 3421 | Summary: CrossGNN is a linear complexity GNN model designed for MTS forecasting, addressing two obstacles: self-attention mechanisms assigning high scores to outlier points and real-world data homogeneity and heterogeneity. By combining Adaptive Multi-Scale Identifier (AMSI), Cross-Scale GNN, and Cross-variable GNN, CrossGNN outperforms recent SOTA methods in real-world datasets.
Strengths: 1. Despite considering the relationship between variables, unlike Transformer-based models, it has a low time complexity O(L).
2. The paper also includes a comprehensive set of ablation studies and the performance in forecasting is SOTA.
Weaknesses: 1. The authors mentioned that noise can disturb the attention mechanism in Transformers. However, the noise ratio is less than 2\% as shown in Figure 1 (b), and I do not think this would be significant. Also, those outliers can be removed by using outlier detection techniques before training.
2. It is necessary to add an explanation or experiments to assure that the self-attention mechanism really adversely affects performance by assigning outlier points as high scores.
3. Technical novelty is a bit weak. Concepts of cross-scale and cross-variable are already introduced concepts in TimesNet and Crossformer. Also, many papers have already used GNN to deal with cross-variable interaction. What is the main difference except the input?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In Figure 2, all scales are concatenated in the time dimension from coarse to fine scale. Why is this kind of concatenation needed? Is there any reason for this?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: There seems to be no limitations discussion in this paper. Please include one.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer MKp9,
Thank you for your insightful advice for polishing our manuscript. We have conducted sufficient experiments and analysis to dispel your concerns. The details can be found below.
**W1: Noisy illustration issue in Figure 1 (b) & Whether it works when outliers removed.**
(1) Figure 1 (b) is an illustrative example. We increase the noise level and present the varying results in **Appendix B.1** to verify the effectiveness of CrossGNN. The results demonstrate that CrossGNN exhibits good robustness against noisy data and has a great advantage when dealing with unexpected noise.
(2) The significance of denoising-aware learning and the reasons for not removing outliers directly:
- **Noise can be inevitable in real world.** Figure 1 (b) provides an illustrative example based on four academic datasets. It is possible that the noise ratio in our actual application may be much higher than the reported 2%.
- **Current anomaly detection algorithms may not detect all types of noise.** The noise can be summarized into two scenarios: persistent influence (e.g., traffic accidents) and instantaneous influence (e.g., sensor distortion). Some noise may exhibit sudden increases or decreases, while other noise may consist of continuous, low-amplitude variations.
- **Directly filtering out outlier noise can result in significant information loss.** If we remove all the noisy samples detected by the noise detector, the remaining available samples may not provide sufficient support for training.
In addition, we perform experiments by removing outlier points with noise detection algorithms [1] on four datasets. The prediction results are shown in **Table 3 of the one-page PDF**. We find that directly removing noisy samples does not improve the prediction performance.
[1] Robust regression and outlier detection. 2009.
**W2: Explanation and experiment of noise effect on self-attention.**
Thank you for highlighting the importance of validating the impact of noise on the self-attention mechanism. We have investigated the experimental results on four datasets and provide the explanation to your concerns from both theoretical analysis and empirical perspectives.
**Theoretical analysis:**
In the self-attention mechanism, for a given input time series $X=[x_1,x_2,x_3,...,x_L]\in R^{L\times D}$, where $L$ denotes the sequence length and $D$ represents the embedding size, the attention score of the $i$-th time step with respect to the $j$-th time step can be expressed as $a_{i,j}=\frac{e^{(x_i q)\cdot (x_j k)}}{\sum_{m=1}^{L}e^{(x_i q)\cdot (x_m k)}}$, where $q\in R^{D \times C}$ and $k\in R^{D\times C}$ are the query and key matrices. It can be observed that the score $a_{i,j}$ depends on the value of $x_j$: when $x_j$ is a high-value outlier and the weights in $q$ and $k$ are positive, $a_{i,j}$ is assigned a larger value.
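A quick numerical illustration of this argument (an illustrative NumPy sketch, not code from the paper): with small positive-weighted query/key matrices, a single injected outlier receives a saturated attention score.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
L, D, C = 16, 4, 8
x = 0.1 * rng.standard_normal((L, D))
x[7] += 5.0                  # inject a high-value outlier at time step 7

# Positive query/key weights, matching the case discussed above
q = np.abs(rng.standard_normal((D, C)))
k = np.abs(rng.standard_normal((D, C)))

scores = (x @ q) @ (x @ k).T                 # (L, L) pre-softmax scores
a = np.apply_along_axis(softmax, 1, scores)  # row-wise attention weights
# a[7, 7] is essentially 1: the outlier dominates its own attention row
```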
**Experimental results:**
**(1) High score.** We trained predictive models using a vanilla Transformer on the ETTh1 and Traffic datasets. **In Figure 1 of the one-page PDF,** the randomly sampled sequences and their corresponding attention matrices demonstrate that applying the self-attention mechanism to time series data can indeed assign higher scores to outlier points.
**(2) Performance degradation.** To evaluate the robustness of CrossGNN and Transformer-based models against noise, we add different intensities of Gaussian white noise to the original MTS and observe the performance changes. **These results are in Appendix B.1.** We found that as the signal-to-noise ratio decreases (i.e., the noise intensity increases), the performance of the Transformer models degrades rapidly, while that of CrossGNN degrades more slowly. This means noise has a greater negative impact on the prediction performance of Transformer-based models.
We also observed in recent work [2] that vision Transformers might excessively focus on a few abnormal tokens, termed the "token overfocusing phenomenon." This aligns with our perspective. We will include these experimental results in the final version; thank you for your suggestion to enhance the quality and validity of our manuscript.
[2] Robustifying Token Attention for Vision Transformers. ICCV,2023.
**W3: Technical novelty.**
Thank you for your question. We have provided a detailed description of the technical novelties of CrossGNN in the global rebuttal and explained the differences between CrossGNN and other methods such as TimesNet, Crossformer, and GNN-based models.
- **Time-interaction**.
1. **First attempt to explore GNN for temporal modeling**: Exploit GNN to connect different temporal points.
2. **First attempt to propose Cross-Scale information focus**: Refine interaction learning across different temporal scales to reduce noise effects.
3. **Adaptable Multi-Scale MTS Acquisition**: An Adaptive Multi-Scale Identifier to dynamically partition scales based on sequence frequency characteristic.
- **In variable interaction.**
1. **Pioneering Temporal Data Heterogeneity.**
2. **Low computation complexity.**
These novel solutions jointly resolve the new problem of reducing noise in time series and contribute to a noise-reduced learning scheme. Detailed descriptions of these technical novelties and differences from other models can be found in our common responses to all reviewers.
**Q1: Concatenation of Scales.**
The reasons can be three aspects.
1. The direct concatenation can well preserve the full multi-scale information.
2. To guarantee cross-scale interaction, concatenating sequences of varying scales allows direct matrix multiplication for time-step level interactions between different scales.
3. This concatenation leverages the neuron connections and learnable weights of Fully Connected (FC) layers to facilitate feature extraction.
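A toy sketch of the coarse-to-fine pooling and concatenation above (hypothetical helper; the actual AMSI/concatenation code may differ):

```python
import numpy as np

def multi_scale_concat(x, periods):
    """Average-pool x (L, D) at each period length and concatenate the
    resulting scales along the time dimension, coarse to fine."""
    L, D = x.shape
    scales = []
    for p in sorted(periods, reverse=True):             # coarse -> fine
        n = L // p
        scales.append(x[: n * p].reshape(n, p, D).mean(axis=1))
    scales.append(x)                                    # finest scale: raw series
    return np.concatenate(scales, axis=0)

x = np.arange(24, dtype=float).reshape(24, 1)
y = multi_scale_concat(x, periods=[12, 4])
print(y.shape)  # (2 + 6 + 24, 1) -> (32, 1)
```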
Authors of Paper 3421
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thanks for your effort on the rebuttal. My concerns about noise are resolved, but I think technical novelty is still not enough.
1. Adaptive Multi-Scale Identifier exploits top-K frequency in Fourier Transform, which is very similar to temporal 2D-variation transformation of TimesNet. Plus, there is no justification why this kind of temporal scaling is selected. Is there any justification on selecting top-k frequency?
2. I doubt that this work is a combination of GNN and TimesNet. Is there any specialized component in GNN for this work?
3. The effect of GNN seems not in removing outliers as outlier ratio is small in evaluation datasets. If so, I do not understand the effect of modeling cross-scale interaction.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer MKp9 1/2
Comment: Dear Reviewer MKp9,
Thank you for reviewing our response and taking time to raise your questions. We will further elaborate on the novelty of our paper and provide detailed responses to your inquiries:
**Q1. Justification on selecting top-k frequency.**
**1.1 Top-k is necessary.**
Actually, the top frequencies indicate the principal components of a time series once its representation is transferred from the temporal to the frequency domain. Considering the sparsity of the frequency domain, and to avoid the noise brought by meaningless high frequencies \[1\]\[2\], we only select the top-$k$ amplitude values and obtain the most significant frequencies, where $k$ is a hyperparameter. Through **hyperparameter experiments** on the weather and traffic datasets (**in Figure 5(a) of the manuscript**), we found that setting $k$ to 5 achieves favorable results. Hence, we believe $k=5$ fits our model.
**1.2 AMSI is novel and effective.**
AMSI exploits the top-$k$ frequencies of the Fourier transform to capture periodicity. Our periodicity-extraction module adaptively and efficiently captures different scales of the time series by integrating an additional average pooling, which aggregates period information to the scale level. The combination of automatically obtained periods and the average-pooling mechanism is **a novel and effective approach to obtaining multi-scale MTS**: it generates scales suited to the input data variations and better captures the evolving patterns of the input data.
[1] The analysis of time series: an introduction. 1981.
[2] FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In ICML, 2022.
**Q2. CrossGNN is not just combination of GNN and TimesNet**
We will clarify the distinctions from three aspects, **1) Specialized component of CrossGNN**, **2) Differences from TimesNet**, **3) Differences from GNN-based models**.
**2.1 Specialized component in GNN for this work**
**We are the first to explore a lightweight and efficient pure GNN-based time series prediction model.** Notably, we introduce novel designs in GNN modeling for time series data:
1. In the variable dimension, we are the first to separate variable relationships into homogeneous and heterogeneous graphs, enhancing the effectiveness of learning complex interactions.
2. In the temporal dimension, we are the first to apply GNNs for stable time relationship learning, focusing on associations across different scales.
3. In both time and variable dimensions, we employ pruning to remove less relevant edges, resulting in linear complexity for our GNN model.
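A sketch of the pruning idea in point 3 (illustrative; the actual edge-selection rule in CrossGNN may differ): keeping only the $k$ strongest edges per node makes message passing linear in the number of nodes for fixed $k$.

```python
import numpy as np

def prune_topk(A, k):
    """Keep only the k strongest edges per node (by |weight|); zero the rest.
    Each row then has at most k nonzeros, so aggregation costs O(N * k)."""
    A = A.copy()
    for i in range(A.shape[0]):
        weak = np.argsort(np.abs(A[i]))[:-k]  # all but the k largest
        A[i, weak] = 0.0
    return A

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))   # dense score matrix (illustrative)
A_sparse = prune_topk(A, k=2)     # 2 surviving edges per row
```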
**2.2 Differences from GNN-based models.**
GNN-based models typically operate on spatio-temporal data with pre-defined topological structures, leveraging pre-determined graph connections to learn spatial patterns and employing TCNs for temporal patterns. However, they overlook: **(1) heterogeneity among variables, (2) the $O(N^2)$ complexity, lacking graph pruning for efficiency, and (3) the potential of GNNs for capturing temporal relationships**, failing to exploit GNNs to model series along the temporal dimension.
**2.3 Differences from TimesNet.**
CrossGNN is fundamentally distinct from TimesNet, with the only commonality being the utilization of frequency domain information to extract periods.
- Although the solution to obtaining periods can be replaced by techniques such as power spectral density diagrams and autocorrelation analysis, we choose FFT for obtaining periods as it is faster and its effectiveness has been validated in other works [3].
- Regarding differences, in AMSI, we extract periods adaptively to capture diverse time scales through average pooling, fostering interaction between different scales. In addition, we capture the heterogeneous interactions among variables via extracting stable relations through GNN, and achieve a lightweight model through pruning. However, in contrast, TimesNet transforms sequences into 2-D inputs for large CNN-based backbones, limited to capturing adjacent period information and incurring substantial memory and computational costs.
Furthermore, we provide comparisons of memory usage, runtime, and MSE on two datasets as follows:
**ETTm2:**
| Model | CrossGNN | TimesNet |
| ---------------- | ------------ | -------- |
| Time (per batch) | **15.6 ms** | 205.4 ms |
| Memory | **1.367 GB** | 3.285 GB |
| MSE | **0.309** | 0.340 |
**Weather:**
| Model | CrossGNN | TimesNet |
| ---------------- | ------------ | -------- |
| Time (per batch) | **20.4 ms** | 309.3 ms |
| Memory | **2.674 GB** | 5.727 GB |
| MSE | **0.159** | 0.172 |
These results show that CrossGNN reduces running time by more than 10 times and saves memory by more than half, while the prediction performance is better.
[3] Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. NeurIPS, 2021. | Summary: Overall Comment:
This article addresses two issues in multi-variate time-series modeling: i) How to address signal noise in multivariate time series, and ii) How to address interactions between multiple variables to extract information. The article proposes two GNN models to solve these problems, including the Cross-Scale GNN model for addressing signal noise and the Cross-Variable GNN for addressing interactions between different signals.
Overall, while some parts of the article are not very clear (e.g., the meaning of homogeneity and heterogeneity in time series) and the ablation study can be improved, the article's innovative ideas are clear and worth being seen by more people. Compared to conventional baseline methods, this article has significant advantages.
Strengths: The strengths of the article include:
1. The paper conducts a large number of experiments, tested on 8 datasets. From the experimental results, the proposed method is effective and performs better than conventional MTS baseline methods.
2. The article is well written and easy to read, making it easy to understand the information that the author is trying to convey.
Weaknesses: The weaknesses of the article include:
1. The analysis of the ablation study needs to clarify which module, the Cross-Scale GNN or the Cross-Variable GNN, plays a larger role in the model, in order to judge whether it is filtering signal noise or introducing interactions between multiple variables that makes the proposed model perform better on each dataset.
2. Correspondingly, if the Cross-Scale GNN model for addressing signal noise plays a larger role, is the focus of the article on time series denoising or modeling time series with noise? The article should be compared with methods specifically designed for denoising time series, rather than just conventional time series modeling baselines.
3. What is the meaning of homogeneity and heterogeneity in time series signals? In Figure 1(d), only positive or negative correlations between variables are displayed, which is not related to homogeneity and heterogeneity relationships between variables.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: As I write in the Weaknesses:
1. Between the Cross-Scale GNN and the Cross-Variable GNN, which plays a larger role in the model? Is it filtering signal noise that makes the proposed model perform better on each dataset, or is introducing interactions between multiple variables the key factor?
2. What is the meaning of homogeneity and heterogeneity in time series signals? In Figure 1(d), only positive or negative correlations between variables are displayed, which is not related to homogeneity and heterogeneity relationships between variables.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors didn't address the limitations clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer zRqD,
Thank you for your valuable insights for polishing our manuscript. We have conducted additional experiments and analysis to address your concerns.
**W1&Q1 Which module is more important.**
Thank you for your question about our experimental analysis. Our ablation experiments in manuscript Section 4.4 have revealed that the Cross-Scale module holds greater significance. Across twelve settings on three datasets, removing the Cross-Scale GNN resulted in performance degradation in all settings, positioning the ablated model at the bottom of the rankings in ten settings. This pivotal finding emphasizes the effect of the Cross-Scale GNN on modeling temporal relationships, effectively filtering signal noise, and enhancing downstream prediction outcomes. In the final version, we will emphasize the importance of the Cross-Scale module in the Ablation section; thank you for your guidance in polishing our experimental analysis.
**W2: Add comparative baselines & Robust analysis on noise.**
Thank you for your valuable suggestions.
- We have incorporated comparisons with denoising time series prediction methods, such as Stationary (a model designed for non-stationary time series), and presented the experimental results **in Table 1 and Table 2 of the one-page PDF.** The results demonstrate that our CrossGNN still outperforms the other denoising time series prediction methods.
- Additionally, to validate CrossGNN's performance in time series prediction with noisy data, we introduced varying levels of Gaussian white noise to three datasets (ETTh2, ETTm2, and Weather). The experimental outcomes in **Appendix B.1** illustrate that as the intensity of the Gaussian white noise increases and the signal-to-noise ratio (SNR) gradually decreases from 100 dB to 0 dB, the mean squared error (MSE) of CrossGNN increases more slowly than that of MTGNN and ETSformer. These quantitative results demonstrate that CrossGNN is robust to noisy data and has a clear advantage when dealing with unexpected fluctuations. We speculate such improvements benefit from the explicit modeling of scale-level and variable-level interactions.
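The rebuttal does not specify how the Gaussian white noise was injected, but the standard procedure for corrupting a signal at a target SNR (in dB) can be sketched as follows; the function name `add_noise_at_snr` is purely illustrative:

```python
import numpy as np

def add_noise_at_snr(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add Gaussian white noise so the corrupted signal has the target SNR in dB."""
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(signal ** 2)
    # SNR(dB) = 10 * log10(P_signal / P_noise)  =>  P_noise = P_signal / 10^(SNR/10)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: corrupt a sine wave at 0 dB, where noise power equals signal power
t = np.linspace(0, 4 * np.pi, 1000)
clean = np.sin(t)
noisy = add_noise_at_snr(clean, snr_db=0.0)
```

At 100 dB the added noise is negligible, while at 0 dB it carries as much power as the signal itself, matching the range swept in the appendix experiment.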
**W3&Q2: Meaning of homogeneity and heterogeneity.**
Thank you for your advice in improving our expression. In temporal signals, heterogeneity indicates that the temporal patterns between variables are not similar, while homogeneity indicates similar temporal patterns between variables. In the manuscript, we use negative correlation to represent heterogeneous variables and positive correlation to represent homogeneous variables.
Thank you for the encouraging review; your valuable suggestions will indeed help us further improve our manuscript. Thanks again!
Authors of Paper 3421
---
Rebuttal Comment 1.1:
Title: Requesting an update from Reviewer zRqD following the authors' rebuttal
Comment: Reviewer zRqD,
As the discussion period is nearing its end, please read the authors' response to your comments on Paper 3421 and indicate whether your concerns are addressed. | Summary: This paper aims to deal with the temporal fluctuations and heterogeneity between variables, caused by unexpected noise, for better multivariate time-series forecasting. Specifically, the authors propose a linear complexity CrossGNN model, including Cross-Scale GNN which captures relationships inter- and intra- scales, and Cross-Variable GNN which captures the homogeneity and heterogeneity relationships between different variables. Experiments on 8 benchmark multivariate time-series datasets demonstrate the effectiveness of CrossGNN over some existing methods.
Strengths: 1. The authors propose a GNN-based (CrossGNN) method with linear complexity for long-term time series forecasting.
2. The CrossGNN captures the relationships between both scales and variables.
3. The paper is easy to understand.
Weaknesses: 1. The proposed CrossGNN does not compare with the SoTA methods, like RLinear, RMLP [1] and PatchTST [2]. I checked the results in this paper (Table 1 & Figure 4) and [1,2], and found that CrossGNN is worse than RLinear, RMLP and PatchTST.
[1] Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping
[2] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
2. It is unclear why the heterogeneity between variables is caused by unexpected noise.
3. At the top of Page 5, it is unclear why the product of two learnable vectors can diminish the effect of noise.
4. What is the meaning of "C" in Figure 2?
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Please compare with SoTA methods, like RLinear, RMLP and PatchTST.
2. Please give explanations on the weaknesses 2~4.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer F3ix,
Thank you for your valuable comments on improving our manuscript. First, we have incorporated a comparison with the SoTA baselines. We have also carefully addressed and clarified the concerns you raised below.
**W1: Lack of SoTA baselines.**
Based on your suggestion, we have conducted additional experiments and added the three SoTA baselines mentioned above for comparison **in Table 1 and Table 2 of the one-page PDF**.
Nevertheless, directly comparing our solution with RMLP, RLinear, PatchTST may be unfair.
The reasons can be two aspects,
1. **Publication date.** RLinear and RMLP were published on arXiv (May 18th) after the NeurIPS submission deadline (May 17th).
2. **Different experimental settings.** Our CrossGNN takes 96 time steps as input, while RMLP, RLinear, and PatchTST take 336, much longer than ours. A much longer input sequence can lead to higher performance but deteriorates efficiency.
Finally, we emphasize that we now conduct fair comparison experiments in the following ways:
- **Input length 96:** We directly compared CrossGNN with PatchTST, RMLP, RLinear with the same input length.
- **Input length 336:** Similar to RMLP, RLinear, and PatchTST, we also incorporated the **RevIN** [3] technique in CrossGNN to mitigate data shift effects.
The final experimental results (**in Table 1 and Table 2 of the one-page PDF**) demonstrate that our CrossGNN still outperforms recent SoTA methods, simultaneously maintaining linear complexity.
Thank you again for suggestions in improving our experiments. We will include the complete experimental results in the final version and cite the two referenced papers \[1\]\[2\] accurately.
[1] Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping
[2] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
[3] Reversible instance normalization for accurate time-series forecasting against distribution shift. ICLR, 2022.
**W2: Heterogeneity between variables caused by unexpected noise.**
**Heterogeneity between variables** refers to some variables exhibiting distinct temporal patterns. In a stable physical system, most variables exhibit homogeneous correlations. Heterogeneity generally arises from unexpected noise, which can be summarized into two scenarios:
**(1) Persistent influence:** Continuous noise, such as emergency events, leads to sustained changes in the temporal patterns of certain variables, resulting in disparate time series. For instance, in the Traffic dataset, some traffic intersections exhibit higher nighttime traffic than daytime due to accidents during the day.
**(2) Instantaneous influence:** Brief yet high-frequency noise, like sensor distortions, introduces irregular data points into the time series of certain variables, reducing the overall sequence regularity.
The above kinds of noise affect the temporal signal in two ways: 1) different variables exhibit various evolutionary patterns, and 2) the correlations between variables vary over time. Thus, we design the Cross-Variable module to effectively enhance the role of cross-variable relationship learning in time series prediction.
**W3: The product of two learnable vectors.**
Eq(6) at the top of Page 5 aims at learning stable and cross-scale temporal correlations $E^{scale}\in R^{L'\times L'}$ not affected by input noise. Here, we explain why it can mitigate the impact of input noise on learning temporal correlations:
- **Input-independent**: Previous works learn correlation weights by treating the correlation as a function of the input $X$. In this paper, we instead initialize the learnable correlation $E^{scale}$ as the product of two learnable vectors $vec_1^{scale}$ and $vec_2^{scale}$, which are independent of $X$. Thus, it cannot be disturbed by a noisy $X$ and directly learns the correlation regularity from general data sequences.
- **Cross-scale**: We extend $E^{scale}$ into multiple scales with coarse sequence, as shown in main text Figure 1(c), where the insight is that coarse temporal scale extracts the backbone of regularity and thus less influenced by noise.
Besides, we can further verify the motivation of our denoising insight through an analysis of self-attention. In extensive experiments on the impact of noise when learning long temporal correlations, we observed cases where abnormal noise received excessive attention, as depicted in **Figure 1 of the one-page PDF**. Thus, exploring a dependency structure independent of input noise is crucial.
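As a rough illustration of this input-independent design (the actual CrossGNN shapes and training code are not given in this discussion, so the sizes, the rank-one initialization, and the top-k pruning below are all assumptions made for the sketch):

```python
import numpy as np

L_prime = 8  # hypothetical length of the multi-scale sequence
rng = np.random.default_rng(0)

# Stand-ins for the two learnable vectors; in training these would be
# model parameters updated by gradient descent, not fixed random draws.
vec1 = rng.normal(size=(L_prime, 1))
vec2 = rng.normal(size=(1, L_prime))

# E^scale is their product: an L' x L' correlation matrix that never reads
# the input X, so a noisy input cannot perturb the learned dependency structure.
E_scale = vec1 @ vec2

# Pruning inactive edges: keep only the k strongest connections per node,
# bounding the number of active edges (one route to linear complexity).
k = 3
topk_idx = np.argsort(-np.abs(E_scale), axis=1)[:, :k]
mask = np.zeros_like(E_scale, dtype=bool)
np.put_along_axis(mask, topk_idx, True, axis=1)
E_sparse = np.where(mask, E_scale, 0.0)
```

Contrast this with attention-style weights, which are computed as a function of $X$ and can therefore latch onto abnormal noisy points.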
**W4: The meaning of "$C$" in Figure 2.** $C$ refers to the channel dimension. Here, we employ a dimension-expansion strategy, creating an embedding for each time step. This strategy is inherited from MTGNN [4], aiming to enhance the local semantics at each time step and positively impact subsequent cross-scale and cross-variable interactions. Thank you for your question; we will add this explanation in the final version.
[4] Multivariate time series forecasting with graph neural networks. SIGKDD, 2020.
Authors of Paper 3421
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your responses.
Is there any reason why only the results of 4 datasets are shown in Table 1 and Table 2 of the one-page PDF?
For Input length 336, is it possible to show the results of CrossGNN without RevIN? It is better to understand the performance improvement comes from the proposed CrossGNN or RevIN, when comparing with PatchTST.
---
Reply to Comment 1.1.1:
Title: Response to follow-up questions of Reviewer F3ix (Part 1/3)
Comment: Dear Reviewer F3ix,
Thank you for taking the time with further discussions to polish our manuscript. At this time, we provide more detailed clarification and experiments to address concerns you've raised.
**1. Only 4 datasets are recorded in the one-page PDF.**
Due to space limitations, our initial response only provided empirical results on four representative datasets: two smaller ETT datasets, ETTh2 and ETTm2, and two larger datasets, Weather and Traffic. The results demonstrate the performance of CrossGNN still outperforms RLinear, RMLP, and PatchTST under equivalent settings. In this report, we present a more comprehensive evaluation of CrossGNN against PatchTST across all eight datasets.
**2. Analysis on the comparison between PatchTST and CrossGNN**
We set the input length to 336 and compare the performance of PatchTST (with and without RevIN) and CrossGNN (with and without RevIN) on the following eight datasets. We make the following observations:
**2.1 Forecasting performance:**
**(1) Without RevIN on either model, CrossGNN demonstrates an average gain of 8.36% on MSE and 5.8% on MAE over PatchTST. (2) When both models incorporate RevIN, CrossGNN exhibits an average improvement of 3.42% on MSE and 2.5% on MAE over PatchTST.** This highlights not only the superior predictive capabilities of CrossGNN but also its advantage in handling sequence anomalies (considering the instance-normalization nature of RevIN). Moreover, as the reported results of PatchTST are based on RevIN, we directly compare the performance of CrossGNN without RevIN against PatchTST with RevIN across all 64 settings on the 8 datasets. **The results show that in 50/64 settings, CrossGNN without RevIN still outperforms PatchTST with RevIN.** This verifies the inherent superiority of CrossGNN in predictive performance compared to PatchTST.
**2.2 Computational Efficiency:**
We record the per-batch runtime and memory usage for PatchTST and CrossGNN under identical settings. **The results reveal that CrossGNN not only outperforms PatchTST in prediction performance, but also achieves 10 times faster runtime and 8 times lower memory consumption than PatchTST.** This advantage is attributed to the lightweight pruning employed in the GNN.
**The detailed results are as below.**
**Table 1. Efficiency on 8 datasets.**
| PatchTST | ETTh1 | ETTh2 | ETTm1 | ETTm2 | Traffic | Elec | Weather | Exchange |
| ----------- | :---: | :---: | :---: | :---: | :-----: | :----: | :-----: | :------: |
| Time (ms) | 54.0 | 53.9 | 54.1 | 53.8 | 1692.2 | 191.7 | 141.0 | 75.7 |
| Memory (GB) | 3.662 | 3.645 | 3.647 | 3.634 | 79.674 | 39.162 | 4.732 | 3.837 |
| CrossGNN | ETTh1 | ETTh2 | ETTm1 | ETTm2 | Traffic | Elec | Weather | Exchange |
| ----------- | :---: | :---: | :---: | :---: | :-----: | :---: | :-----: | :------: |
| Time (ms) | 15.6 | 15.7 | 15.4 | 15.7 | 84.3 | 59.5 | 21.8 | 18.3 |
| Memory (GB) | 1.366 | 1.372 | 1.367 | 1.377 | 4.986 | 3.564 | 2.647 | 2.165 |
**Table 2-9. Prediction results with 336 input length on 8 datasets.**
| Exchange (MSE\|MAE) | 96 | | 192 | | 336 | | 720 | |
| ------------------------ | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: |
| CrossGNN w/o RevIN | $\underline{0.079}$ | 0.198 | $\underline{0.168}$ | $\underline{0.292}$ | $\underline{0.319}$ | $\underline{0.399}$ | $\underline{0.650}$ | $\underline{0.597}$ |
| CrossGNN +RevIN | **0.076** | **0.193** | **0.163** | **0.289** | **0.311** | **0.396** | **0.641** | **0.590** |
| PatchTST w/o RevIN | 0.089 | 0.211 | 0.179 | 0.304 | 0.341 | 0.413 | 0.792 | 0.648 |
| PatchTST + RevIN | 0.080 | $\underline{0.196}$ | 0.171 | 0.294 | 0.327 | 0.408 | 0.656 | 0.602 |
| ETTh1(MSE\|MAE) | 96 | | 192 | | 336 | | 720 | |
| ------------------ | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: |
| CrossGNN w/o RevIN | $\underline{0.374}$ | $\underline{0.398}$ | $\underline{0.405}$ | $\underline{0.418}$ | **0.410** | **0.415** | $\underline{0.445}$ | $\underline{0.459}$ |
| CrossGNN + RevIN | **0.363** | **0.392** | **0.401** | **0.409** | $\underline{0.411}$ | $\underline{0.416}$ | **0.440** | **0.453** |
| PatchTST w/o RevIN | 0.388 | 0.412 | 0.429 | 0.436 | 0.456 | 0.459 | 0.493 | 0.499 |
| PatchTST + RevIN | 0.375 | 0.399 | 0.414 | 0.421 | 0.431 | 0.436 | 0.449 | 0.466 | | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to thank you for your valuable time and constructive comments on our manuscript and we have made sufficient improvements of our work according to your comments. Here we list the major improvements below.
- **Enhanced Clarifications**: We have enhanced our manuscript with more in-depth explanations where necessary, such as an explanation of ablation studies.
- **Additional Experiments**: Based on the feedback, we have conducted additional experiments to further validate the effectiveness of our proposed model. These include, but are not limited to, a comparison with recent SoTA methods such as RLinear, PatchTST, and Crossformer, and a comparison of running time among well-established deep forecasting models.
- **Clearer Definitions**: We have added clear definitions for terms like 'heterogeneity between variables' to ensure readers have a comprehensive understanding of our method.
**Technical novelty of CrossGNN.**
The contribution of CrossGNN extends beyond highlighting the challenges of unexpected noise and variable-wise heterogeneity. It also introduces significant innovations in enhancing interactions across the time and variable dimensions. Below, we elaborate on our technical novelty and highlight the distinctions between CrossGNN and other SoTA methods.
- **In time interaction.**
1. **First attempt to explore GNN for temporal modeling.**
To the best of our knowledge, this is the first attempt to explore GNNs for capturing temporal relationships, going beyond the variable-level correlations captured in previous literature. We elucidate how to exploit a GNN to connect different temporal points and learn a stable, general temporal graph that is free from noise interference.
2. **First attempt to propose Cross-Scale interaction.**
We emphasize the significance of point-level temporal interactions across scales. Based on the insight that coarse temporal scale extracts the backbone of regularity, we introduce the cross-scale module, which directly extracts coarser-scale information and extends the temporal learning towards multiple scales. CrossGNN allows refined interaction learning across different temporal scales, resulting in noise-reduced effects.
3. **Adaptable multi-scale MTS acquisition.**
We devised an Adaptive Multi-Scale Identifier (AMSI), capable of dynamically partitioning scales based on input sequence frequency characteristics. This distinguishes our approach from prior methods that rely on fixed-length manual scaling, contributing to an automatic multi-scale sequence acquisition.
- **In variable interaction.**
1. **Pioneering temporal data heterogeneity.**
We first propose Cross-Variable learning via decoupling temporal homogeneous-heterogeneous relationships.
2. **Low computation complexity.**
We maintain the computational complexity of both the Cross-scale and Cross-variable GNN modules at a linear level by constraining the number of inactive edges.
In summary, we outline the distinctions and innovations that set us apart from the similar methods the reviewer mentioned:
- **Differences from TimesNet.** Even though TimesNet extends 1-D series data to a 2-D space, it still lacks explicit modeling of interactions across different temporal scales.
- **Differences from Crossformer.** Crossformer solely focuses on homogenous associations and ignores heterogeneity in the real world. Besides, the time complexity of Crossformer is $O(rDL)$ ($r$ for router vectors, $D$ for variable count, $L$ for sequence length), while CrossGNN is $O(D)$.
- **Differences from GNN-based Approaches.** GNN-based time series prediction methods merely capture cross-variable relationships. They overlook (1) heterogeneity among variables, (2) their $O(N^2)$ complexity, lacking graph pruning for efficiency, and (3) the potential of GNN-based approaches for capturing temporal relationships, failing to exploit GNNs to model series along the temporal dimension.
We will polish the technical novelty in our manuscript based on above clarification.
Finally, we believe that these revisions have significantly improved our manuscript. We hope that our responses and the changes made address the concerns of the reviewers adequately.
Once again, thank you for your time and effort in reviewing our work. We look forward to your continued feedback.
Best regards,
Authors of Paper 3421
Pdf: /pdf/bf3f50c76a4005a2725e6029342070923cf84390.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Proximity-Informed Calibration for Deep Neural Networks | Accept (spotlight) | Summary: This paper quantifies and proposes a mitigation for a phenomenon in DNN training, where more 'unusual' examples (here defined as having a higher average distance to its K=10 nearest neighbors) are generally more miscalibrated across a range of models and tasks. The authors propose a new proximity-aware calibration metric, PIECE, and demonstrate that it can capture calibration issues that are not necessarily captured by the standard ECE metric (and, in fact, always P>= ECE). Additionally, the authors propose a mitigation, ProCal. This mitigation is presented in two variations, one for continuous and one for discrete confidence. These work by adjusting the uncalibrated probability score based on the model's miscalibration on examples with that average distance.
Strengths: The paper is very clearly written, with a logical flow and good explanations of all steps taken. I particularly enjoyed the motivation via experiments on existing models leading to theory-driven proposals leading to further experimental verification. I also appreciated the through ablation study and the extra attention given to OOD examples.
In addition, the original miscalibration problem that the paper brings up seems to me to be important and relevant, and is well-motivated. As I am not an expert in miscalibration mitigation techniques I cannot fully comment on the novelty aspect of the work, though I have not seen this 'atypicality' concern brought up explicitly before; it also dovetails neatly with other research in atypical examples.
The proposed metric and mitigations, while very simple, are logical and effective, which I hope will lead to their adoption.
Weaknesses: I felt that the 'atypical' (high $D(X)$) examples could have been better characterized. In particular, the authors relate these examples to underrepresented categories in datasets (eg, Black people in health datasets), but this is not necessarily so. It does not seem incredible to have a scenario where a data is divided into two clusters, where the smallest cluster is nevertheless very tight, and so has a smaller average $D(X)$.
It is not really clear what the ProCAL method refers to. It seems like it mostly splits into two methods each of which have their own name. Maybe it would be better to call them ProCAL-C and ProCAL-D (for continuous and discrete)?
The analysis of ProCAL effectiveness seems a little sparse in claiming "our method consistently improves the calibration"... My interpretation of the data would be that ProCAL is very helpful in conjunction with raw confidence as well as some of the other methods with higher PIECE scores (TS variants), but not as effective for methods such as IR and MIR, where it seems to hurt as much as help. I don't think that this disqualifies the paper, but a more thorough explanation and analysis of this would be appropriate.
(minor) in section 3.1, what is the sensitivity of two points having the 'same' confidence?
(minor) It is not clear why the definition of D(X) contains an exponent, rather than the simple average of the ten smallest distances.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the weakness section above. In addition:
* For cases where the test distribution doesn't match the train distribution, how much are we losing by using the test data to estimate values of D (versus measuring the distance of a test sample point to the closest 10 training points)? It would seem that the training distribution is actually the important one for estimating atypicality.
* How does the Bias Index changed if only $B_H$ is used for sampling points and $B_L$ is used for matching, and vice versa? What if the intermediate groups are used?
* Is it possible to characterize the $B_H$ and $B_L$ groups? Is there a large difference of average confidence between the two, and how well do their confidences intersect?
* Why is ProCAL less effective in conjunction with IR and MIR?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate reviewer 4HP9's constructive feedback. We are glad the reviewer enjoyed the logic flow of this paper, finds the problem important and the solution simple, logical and effective.
**Q1: I felt that the 'atypical' (low proximity) examples could have been better characterized. In particular, the authors relate these examples to underrepresented categories in datasets (eg, Black people in health datasets), but this is not necessarily so. It does not seem incredible to have a scenario where the data is divided into two clusters, where the smallest cluster is nevertheless very tight, and so has a smaller average.**
This is indeed a thoughtful question. First, we want to clarify that 'low proximity' in our definition refers to 'low density', but the context of 'underrepresented' here links more to 'low probability', which are correlated but not equivalent.
Second, we agree with the reviewer that our existing definition of proximity does not include samples with low probability but high density.
It is still an open question whether these underrepresented samples with high proximity suffer from proximity bias (i.e., whether low-probability samples are inherently more overconfident than high-probability samples) and whether they need mitigation. We leave the investigation of these cases and a better characterization to future work.
**Q2: It is not really clear what the ProCAL method refers to. It seems like it mostly splits into two methods each of which have their own name. Maybe it would be better to call them ProCAL-C and ProCAL-D (for continuous and discrete)?**
Thank you for your suggestion. The intention behind the term 'ProCAL' is to represent the general strategy and a series of methods for mitigating proximity bias. Since we've tailored two specific methods for different output types, we've referred to these as Density-Ratio Calibration and Bin-Mean-Shift. We will clarify this in the paper.
**Q3: why the definition of D(X) contains an exponent, rather than the simple average of the ten smallest distances.**
This is because the current definition normalizes the distance measure from a range of $[0, \infty)$ to $(0, 1]$, making the approach more robust to the effects of distance scaling, since absolute Euclidean distances can cause instability and difficulty in modeling.
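A minimal sketch of this normalization, assuming the exponent is applied to the average distance to the K nearest neighbors (exact details such as the neighbor count and the feature space are assumptions here, not the paper's definitive formula):

```python
import numpy as np

def proximity(x: np.ndarray, reference: np.ndarray, k: int = 10) -> float:
    """Map the average distance to the k nearest neighbors into (0, 1]."""
    dists = np.linalg.norm(reference - x, axis=1)
    avg_knn_dist = np.mean(np.sort(dists)[:k])
    # exp(-d) maps d in [0, inf) to (0, 1]: a point on top of its neighbors
    # gets proximity 1, while far-away (atypical) points approach 0.
    return float(np.exp(-avg_knn_dist))

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 16))  # stand-in for validation-set embeddings
typical = proximity(reference[0], reference[1:], k=10)
atypical = proximity(reference[0] + 10.0, reference[1:], k=10)
assert typical > atypical  # denser neighborhoods yield higher proximity
```

The exponential keeps the measure bounded regardless of how the embedding space is scaled, which is the robustness property described above.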
**Q4: For cases where the test distribution doesn't match the train distribution, how much are we losing by using the test data to estimate values of D**.
First, we want to clarify that we actually use a validation set rather than the test set for estimating atypicality (i.e., proximity), which we assume is drawn i.i.d. from the training distribution. Besides, in most cases, including settings with distribution shift, we do not presume to have knowledge of the distribution-shifted data, and therefore we have no access to the test data.
However, in case we can access the test dataset to estimate the proximity, how much we are losing is largely dependent on how far the test distribution deviates from the training distribution.
**Q5: Why is ProCAL less effective in conjunction with IR and MIR?** My interpretation of the data would be that ProCAL is very helpful in conjunction with raw confidence as well as some of the other methods with higher PIECE scores (TS variants), but not as effective for methods such as IR and MIR, where it seems to hurt as much as help. I don't think that this disqualifies the paper, but a more thorough explanation and analysis of this would be appropriate.
There are two cases related to the question:
1. **ProCal improves PIECE but negatively impacts ACE/ECE**: we argue that the "cancellation effect" (see lines 170-181) is responsible for this. This effect emerges when positive calibration errors (i.e., acc - conf > 0) from high proximity samples and negative errors from low proximity samples offset each other within the same confidence group, causing ACE/ECE to potentially underestimate the calibration error. Binning-based methods are particularly prone to this because it shifts all samples in a bin to the same confidence, increasing the chances of the cancellation effect. Although ProCAL, which corrects each proximity bin independently, might increase ECE/ACE by disrupting this cancellation effect, a rise in ECE/ACE doesn't necessarily reflect overall calibration degradation.
2. **ProCal worsens PIECE**: note that IR/MIR are binning-based, and hence Bin-Mean-Shift is used. We speculate that the deterioration of Bin-Mean-Shift stems from the inaccuracy of each bin's estimates. When bins have few samples, their accuracy estimates can vary significantly and hence be unreliable, leading to the observed performance decline.
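The cancellation effect in point 1 can be reproduced with a toy bin (the numbers below are invented for illustration): within a single confidence bin, under-confident high-proximity samples and over-confident low-proximity samples offset each other, so the pooled ECE-style gap vanishes while a proximity-conditioned gap does not:

```python
import numpy as np

conf = 0.8  # every sample in this bin has confidence 0.8
# High-proximity group is under-confident (accuracy 0.9 > 0.8),
# low-proximity group is over-confident (accuracy 0.7 < 0.8).
acc_high_prox = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])
acc_low_prox  = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])

# ECE pools the bin: the +0.1 and -0.1 gaps cancel exactly.
pooled_gap = abs(np.concatenate([acc_high_prox, acc_low_prox]).mean() - conf)

# A proximity-conditioned error keeps the groups apart, exposing the bias.
conditioned_gap = 0.5 * (abs(acc_high_prox.mean() - conf)
                         + abs(acc_low_prox.mean() - conf))

print(round(pooled_gap, 6))       # 0.0 -> the pooled metric misses the bias
print(round(conditioned_gap, 6))  # 0.1 -> conditioning on proximity reveals it
```

This is why PIECE, which conditions on proximity before averaging, is always at least as large as ECE on the same predictions.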
---
For questions pertaining to the computation of the bias index, we direct the reviewer to our **global response** above.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I thank the reviewers for the responses to my and other reviewers' questions and suggestions. It seems that on the whole the final paper will be quite similar to the version we reviewed with some additional clarifications.
I do think the question of domain shift could be better explored (or explained). For a concrete real-world example, to use the skin color/cancer detection example. If the samples in the training data are collected from majority light-skinned individuals, the ProCAL method would likely help adjust the confidences for the dark-skinned minority. However, if the training data are collected from exclusively light-skinned individuals, can the ProCAL method still be effective?
Overall, I keep my score, and thank the authors for an interesting paper.
---
Reply to Comment 1.1.1:
Title: Experiments show that ProCal effectively improves upon existing algorithms in the domain shift setting
Comment: Thank you for the prompt reply and the insightful question!
To explore the issue of domain shift, where test data has a shifted distribution not seen in training, we conduct experiments using **ImageNet (training set)** and **ImageNet-Sketch (test set)**. The datasets are chosen because all ImageNet images are real-world photos, while all ImageNet-Sketch images are sketches, collected using Google Search with class label keywords and "sketch", similar to the case of skin color example.
**Experiment Setup**: We employ a ResNet50 backbone from `timm`, trained on ImageNet. We then train ProCal using the ImageNet validation set and test it on the 50,000 images of ImageNet-Sketch. The term "base" represents the corresponding method's outputs (e.g. Conf, TS, ETS), while "+ours" indicates the output confidence scores corrected using our approach.
| Method | ECE | ACE | MCE | PIECE |
| :--- | :--- | :--- | :--- | :--- |
| | base / +ours | base / +ours | base / +ours | base / +ours |
| Conf| 0.0871 / **0.0176** | 0.0869 / **0.0174** | 0.0365 / **0.0062** | 0.0885 / **0.0255** |
| TS | 0.0501 / **0.0180** | 0.0501 / **0.0170** | 0.0153 / **0.0025** | 0.0524 / **0.0223** |
| ETS | 0.0457 / **0.0192** | 0.0479 / **0.0192** | 0.0067 / **0.0024** | 0.0494 / **0.0221** |
| PTS | **0.0126** / 0.0129 | **0.0125** / 0.0132 | **0.0019** / 0.0023 | 0.0208 / **0.0188** |
| PTSK| **0.0094** / 0.0114 | **0.0096** / 0.0126 | **0.0011** / 0.0019 | 0.0275 / **0.0191** |
| MIR | 0.0194 / **0.0154** | 0.0193 / **0.0155** | 0.0032 / **0.0030** | 0.0243 / **0.0240** |
**Results**: As demonstrated in the table above, ProCal effectively improves upon existing algorithms in many cases. While we observe a slight increase in ECE/ACE/MCE when ProCal is paired with PTS and PTSK, this is probably attributable to the original methods suffering from the "cancellation effect", where positive and negative calibration errors within the same confidence bin cancel each other out (see responses to Q5 above; or lines 170-181). Under the PIECE metric, which captures the cancellation effect, our method consistently outperforms all methods by large margins and effectively mitigates their proximity bias. | Summary: This paper studies the prevalence of proximity bias in calibration, i.e. the rate of miscalibration on samples that are far away from their nearest neighbors in the data ("low proximity"). The authors empirically show that this type of miscalibration is present across many models, and propose a new post-training calibration procedure for mitigating it. Their approach shows significant empirical improvements over standard post-training calibration approaches.
Strengths: - **Originality:** The empirical investigation and proposed methodology in the paper is quite novel, as I am not aware of prior work that has studied this type of bias in calibration (although others have studied subgroup calibration).
- **Quality:** The claims in the paper are technically sound, and the experiments exploring proximity bias are extensive (more than 500 pretrained models considered, and various calibration baselines).
- **Clarity:** Overall, the paper is well-written and well-organized, with motivating experiments and intuitive definitions. However, I believe the presentation of the calibration algorithm in the paper could be improved (detailed further in weaknesses).
- **Significance:** The idea of low proximity samples introduced and analyzed by the authors seems quite significant, as these samples can correspond to underrepresented populations in the data.
Weaknesses: - **Algorithm Details:** The weakest part of the paper in my view is the lack of detail in Section 5.1. This part of the paper would be significantly improved by including something akin to the pseudo-code algorithm in the appendix. There are several questions that arise when reading this part: what does one do after estimating the posterior probability conditional on prediction and proximity (in the algorithm I can see that this is just the output on the test point)? Do we compute proximity only with respect to the test data? What type of KDE is used (i.e. kernel, bandwidth, etc.)?
- **Implementation Details:** In addition to algorithm details, some parts of the experimental setup could also be made clearer. How are ECE and ACE computed (i.e. binning scheme)? Do you set aside calibration data for the scaling methods (TS, ETS, PTS, PTSK) in addition to the set aside data for ProCal?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The main questions I have are detailed in weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors appropriately discuss limitations, but it might be helpful to include some bits from the appendix (particularly regarding efficiency) in the main body.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer 4gc8 for the constructive feedback and we are glad that the reviewer finds the investigation and methodology novel, the claim technically sound, the experiment extensive, and the proximity bias issue significant. Here we answer all the questions and hope they can address the concerns.
---
**W1: The weakest part of the paper in my view is the lack of detail in Section 5.1. This paper would be significantly improved by including something akin to the pseudo-code algorithm in the appendix.**
We appreciate your constructive feedback. We will incorporate the pseudo-code algorithms in Section G into the main paper in our forthcoming revision.
**Q1: what does one do after estimating the posterior probability conditional on prediction and proximity (in the algorithm I can see that this is just the output on the test point)?**
Thank you for the question. In fact, we directly treat the estimated posterior probability $\mathbb{P}(\hat{Y}=Y \mid \hat{P}, D)$ as the calibrated confidence. The rationale for this is:
1) The common interpretation of confidence is the probability of the prediction being correct given the sample X, and our posterior probability serves as an estimate of this $\mathbb{P}(\hat{Y}=Y \mid X)$ by using $\hat{P}, D$ as the proxy of sample X (lines 206-210);
2) The commonly-used confidence is $\hat{P}$, while the posterior can be regarded as an updated or modified version of the initial $\hat{P}$.
**Q2: Do we compute proximity only with respect to the test data?**
If we understand the question correctly, the reviewer is asking (please let us know if you meant the question in a different way): *given a test sample, is the proximity computed based on the test set?* To address this, we clarify that the proximity for any test sample is computed using the **validation set**. Here are the details:
1) During inference, when calibrating a test point, we only need to compute this point’s proximity by finding its $K$ nearest neighbors in the held-out **validation** set and no additional proximity computation is required.
2) When training the calibration approach (using the validation set for training), we calculate every point's proximity to others within the validation set to estimate their proximity values, which are then used for training the density estimators for Density-Ratio Calibration and the binning parameters in Bin-Mean-Shift. Note that each point is excluded from its own neighbor search.
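As an illustrative sketch of the two steps above (a brute-force stand-in for the actual nearest-neighbor search; the exponential-of-mean-distance form and the function names are our assumptions, not necessarily the paper's exact definition):

```python
import numpy as np

def proximity(point, val_embeddings, k=10):
    """Proximity of one point w.r.t. the held-out validation set:
    here, exp(-mean L2 distance) to its K nearest validation neighbors
    (illustrative form -- higher means closer to the training data)."""
    dists = np.linalg.norm(val_embeddings - point, axis=1)
    knn = np.sort(dists)[:k]  # distances to the K closest validation points
    return float(np.exp(-knn.mean()))

def val_proximities(val_embeddings, k=10):
    """Proximity of each validation point, excluding itself from the search."""
    return np.array([
        proximity(val_embeddings[i], np.delete(val_embeddings, i, axis=0), k)
        for i in range(len(val_embeddings))
    ])
```

At inference time only `proximity` is called, once per test point; `val_proximities` is run once when fitting the calibrator.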
**Q3: What type of KDE is used (i.e. kernel, bandwidth, etc.)?**
For its simplicity and effectiveness, we use the `KDEMultivariate` function from the `statsmodels` library for density estimation. This function employs a Gaussian kernel and applies the normal reference rule of thumb (i.e. bw $=1.06\hat{\sigma} n^{-1/5}$), based on the standard deviation $\hat{\sigma}$ and sample size $n$, to select an appropriate bandwidth. While it is possible to use other density estimation kernels, such as the exponential kernel in scikit-learn, we found that the Gaussian kernel coupled with the normal reference rule for bandwidth selection generally yields better performance across various models and datasets.
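The rule of thumb above is simple enough to sketch directly (a 1-D illustration of what the library does internally; function names are ours, and the real `KDEMultivariate` handles multivariate and mixed-type data):

```python
import numpy as np

def rule_of_thumb_bw(x):
    """Normal reference rule: bw = 1.06 * sigma_hat * n^(-1/5)."""
    x = np.asarray(x, dtype=float)
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def gaussian_kde_1d(query, samples, bw):
    """Evaluate a 1-D Gaussian-kernel density estimate at the query points."""
    query = np.atleast_1d(query)[:, None]
    z = (query - samples[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(samples) * bw * np.sqrt(2 * np.pi))
```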
**Q4: How are ECE and ACE computed (i.e. binning scheme)?**
We follow [1] to implement Expected Calibration Error (ECE) and Adaptive Calibration Error (ACE):
1) Firstly, we divide samples into 15 bins and compute every bin’s average confidence and accuracy.
2) Next, we compute the absolute difference between each bin's average confidence and its corresponding accuracy.
3) The final calibration error is measured using the weighted difference (the fraction of samples in each bin as the weight).
The key distinction between ECE and ACE lies in the binning scheme: ECE divides bins with **equal-confidence intervals** while ACE uses an adaptive scheme that spaces the bin intervals to contain an **equal number of samples** in each bin.
[1] Nixon J, Dusenberry MW, Zhang L, Jerfel G, Tran D. Measuring Calibration in Deep Learning. In CVPR Workshops, 2019 (Vol. 2, No. 7).
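The three-step procedure above, with both binning schemes, can be sketched in a few lines (an illustrative implementation; variable names are ours):

```python
import numpy as np

def calibration_error(conf, correct, n_bins=15, adaptive=False):
    """Weighted |accuracy - confidence| over bins.
    adaptive=False -> equal-width confidence intervals (ECE);
    adaptive=True  -> equal-count bins (ACE)."""
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    if adaptive:
        edges = np.quantile(conf, np.linspace(0, 1, n_bins + 1))
    else:
        edges = np.linspace(0, 1, n_bins + 1)
    idx = np.clip(np.searchsorted(edges, conf, side="right") - 1, 0, n_bins - 1)
    err, n = 0.0, len(conf)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            # fraction of samples in the bin weights its |acc - conf| gap
            err += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return err
```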
**Q5: Do you set aside calibration data for the scaling methods (TS, ETS, PTS, PTSK) in addition to the set aside data for ProCal?**
In our implementation, we **do not reserve additional data** specifically for ProCal. Instead, we employ the same validation set used to train the other calibration methods (e.g. TS). We designed it this way considering the typical constraints on validation data availability and the cost of acquiring extra data. However, if sufficient validation points are available, we do recommend setting aside separate calibration data for ProCal, in case other calibration methods overfit their validation sets.
---
Rebuttal Comment 1.1:
Comment: These clarifications are useful, thank you (it would also be useful to include some of these in the main body of the revision) - I keep my score. | Summary: The article focuses on the problem of uncertainty quantification in classification.
Calibration provides some guarantees on the estimated class probabilities on average. However, subgroups can still be miscalibrated. The article first aims to characterize these subgroup miscalibrations through proximity levels of the samples. It claims that a classifier, even calibrated, tends to be underconfident on high-proximity samples and overconfident on low-proximity samples. To measure this effect, it defines a proximity-informed ECE. Then, it proposes a recalibration framework based on this proximity-informed measure. Finally, it benchmarks the proposed method on numerous datasets and models.
Strengths: * The problem is well presented and motivated. Figure 1 is pedagogical and helps the comprehension of the problem.
* The idea of characterizing subgroup miscalibrations through proximity is interesting, and refining uncertainty estimates is a good direction.
* The proposed framework is versatile: it can work as a stand-alone or combined with standard calibration techniques. It provides two versions: binning-based and continuous.
* The experimental study is substantial.
* Datasets are large-scale, numerous, and multimodal: ImageNet, Yahoo-Topics, iNaturalist, ImageNet-LT, MultiNLI, ImageNet-C.
* The article studies numerous models, e.g. 504 pre-trained models on ImageNet.
* It compares many standard calibration methods, both scaling-based and histogram-based: temperature scaling, ensemble temperature scaling, parameterized temperature scaling, histogram binning, isotonic regression, and multi-isotonic regression.
* The experimental study provides substantial evidence.
* It reveals proximity bias in most of the 504 pre-trained networks on ImageNet (72% according to a Wilcoxon rank-sum test).
* The proposed method consistently improves over standard calibration methods.
* The time overhead of the method is small, with an increase of 1.17% in inference runtime.
* Completeness of the study: It reveals the proximity bias, proposes a metric to measure it, a recalibration procedure to address it, and substantial experiments showing consistent improvements.
Weaknesses: No major weaknesses.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * In equation (5) on the confidence score adjustments, why a weight hyperparameter $\lambda \in (0, 1]$ is needed? Don't we want to take $\lambda=1$ to minimize PIECE? The paper talks quickly about regularization. What is the tradeoff? Are results wrong with $\lambda$ too close to 1? Do you take $\lambda=0.5$ everywhere in the experiments? How to choose it?
* Concerning the time overhead introduced by the recalibration methods, isn't it weird that Bin-Mean-Shift has only twice the overhead of isotonic regression (+0.1s vs +0.05s)? Since isotonic regression just needs to apply the mapping function from [0, 1] to [0, 1] to the inferred sample, while Bin-Mean-Shift needs to compute the distance from the new sample to all samples from the calibration set. The same question applies to mapping-based recalibration methods such as histogram binning (+0.03s).
* Could you develop when to use the histogram-based version (section 5.2) and when to use the continuous version (section 5.1) of your method?
---
Typos:
* L148: it should be 80% instead of 85%.
* L162: repeated word Appendix.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discuss the following limitations:
* The proposed recalibration technique needs to maintain the calibration set during inference to compute the proximity of the new points.
* Maintaining the calibration set for inference may challenge the method when memory is limited.
* Focus limited to the closed-set multi-class classification problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer 1D75 for the constructive comments and we are glad that the reviewer finds the problem interesting, the solution a good direction, and the study complete. Here we answer all the questions and hope they can address the concerns.
**Q1: In equation (5) on the confidence score adjustments, why a weight hyperparameter \lambda is needed? Don't we want to take \lambda=1 to minimize PIECE? What is the tradeoff? Are results wrong with too close to 1? Do you take \lambda=0.5 everywhere in the experiments? How to choose it?**
To start, we revisit equation (5): $\hat{P}_{\text{ours}} = \hat{P} + \lambda \cdot \left(\mathcal{A}(B_{mh}) - \mathcal{F}(B_{mh})\right)$.
First, we agree with you that setting $\lambda=1$ in Equation (5) would ideally calibrate the model and minimize PIECE more effectively. However, in practice, we often encounter bins with a small number of samples, whose estimates have high variance. In such cases, the calculated $\mathcal{A}(B_{mh}) - \mathcal{F}(B_{mh})$ can be quite inaccurate as an estimate of the corresponding population quantity. To reduce variance in these scenarios, we introduce a **shrinkage coefficient, $\lambda$**. By setting a smaller $\lambda$, we can reduce the variance and make the final prediction more accurate, at the expense of a biased estimator.
Therefore, the use of $\lambda$ can be seen as a form of bias-variance trade-off. By adjusting the value of $\lambda$, we are able to control the balance between bias and variance in the model's output. In practice, we choose $\lambda=0.5$ as a reasonable default for all our experiments, which we find offers consistent performance across various settings.
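To illustrate the role of $\lambda$, here is a sketch of the per-bin shift of equation (5) (names are ours; the actual Bin-Mean-Shift also constructs the proximity-confidence bins, which is omitted here):

```python
import numpy as np

def bin_mean_shift(conf, correct, bin_idx, lam=0.5):
    """Shift each sample's confidence toward its bin's accuracy:
    p_ours = p + lam * (acc(bin) - mean_conf(bin)).
    lam < 1 shrinks the correction, trading bias for lower variance
    when bins contain few samples."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    out = conf.copy()
    for b in np.unique(bin_idx):
        m = bin_idx == b
        out[m] += lam * (correct[m].mean() - conf[m].mean())
    return out
```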
**Q2: Why Bin-Mean-Shift has only twice the overhead of isotonic regression (+0.1s vs +0.05s)? Isotonic regression just needs to apply the mapping function from [0, 1] to [0, 1] to the inferred sample, while Bin-Mean-Shift needs to compute the distance from the new sample to all samples from the calibration set. The same question applies to mapping-based recalibration methods such as histogram binning (+0.03s).**
We attribute our efficiency to the recent advances in neighborhood search algorithms.
In our paper, we employ `IndexFlatL2` from Meta's open-sourced GPU-accelerated Faiss library[2] to find each sample's nearest neighbors. This enables us to **reduce the time for nearest neighbor search to approximately 0.04 ms per sample** (shown below). The computation overhead beyond the neighbor search is quite similar to that of isotonic regression, which leads to the total time being roughly twice that of isotonic regression **(0.04 + 0.05 ≈ 0.1s)**.
For comparison, here is the inference time (ms per sample) for our method and baseline methods on ImageNet (using a ViT/B-16@224px model on a single Nvidia RTX 2080 Ti, with all settings following those in our paper, averaged across 10 runs):
| | NS |Conf | TS | HB | IR | PTS | PTSK |ETS| MIR| BIN* | DEN* |
|:---------------------:|:-----:|:---------:|:---------------------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Time(ms) |0.04 |5.60| 5.60 | 5.63| 5.65 | 5.61 |5.66 |5.64| 5.84 | 5.70 | 6.29|
NS represents the time consumption for **neighbor search**. BIN* and DEN* are our proposed methods, where BIN denotes Bin Mean-Shift for discrete output and DEN is Density-Ratio Calibration for continuous output.
**Q3: Could you develop when to use the histogram-based version (section 5.2) and when to use the continuous version (section 5.1) of your method?**
Thanks for your constructive question. First, we would like to clarify that existing calibration algorithms can be classified into 2 types:
1) those producing **continuous outputs** such as softmax, temperature scaling (TS), and other methods (ETS, PTS, PTSK) which scale the input scores in a continuous way; and
2) those producing **discrete outputs**, particularly binning or isotonic regression methods (HB, IR, MIR) which group the samples and output the exact same scores for samples in the same group.
Our density-ratio calibration works by estimating continuous density functions, and is suited for the 1st case when the input confidence scores are continuous. In contrast, our bin-mean-shift method does not rely on densities and is more suited for utilizing outputs from the 2nd category of methods, i.e. discrete outputs. In combination, our 2 calibration approaches form a plug-and-play framework that is applicable to confidence scores of either type. We will revise the paper to clarify this.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed rebuttal.
Concerning the answer to Q1 on the choice of the $\lambda$ coefficient, according to the authors, it appears that $\lambda = 1$ can be problematic when there are too few samples in the bins. I wonder if this work could benefit from the derivations of [31] section C.5. They face the same problem when estimating the inter-region variance when too few samples are in the regions (see the $GL_{plugin}$ curve in Fig. 1.a. which is overestimated when the number of samples per region is low, versus the $GL_{LB}$ curve, i.e., the $GL_{plugin}$ curve corrected with debiasing, which is correctly estimated even when the number of samples per region is low). I would guess that a debiasing similar to the one of section C.5 [31] could be applied to this work, which could enable values of $\lambda$ closer or equal to 1, even when a few samples are in the bins.
Answer to Q2: the computational efficiency of the neighborhood search is quite impressive.
Answer to Q3: Thanks for clarifying and revising the paper.
[31] Alexandre Perez-Lebel, Marine Le Morvan, and Gaël Varoquaux. Beyond calibration: estimating the grouping loss of modern neural networks. In ICLR, 2023. | Summary: This work addresses the problem of proximity bias and confidence calibration by performing a comprehensive empirical study of various pretrained ImageNet models. The empirical findings provide insights on the persistence of proximity bias even after performing calibration using existing post-hoc calibration algorithms. To mitigate proximity bias and improve confidence calibration based on sample proximity, the paper proposes the PROCAL algorithm, which can be used as a plug-and-play method combined with existing calibration approaches. Further, a proximity-informed expected calibration error metric is introduced to quantify the effectiveness of calibration algorithms in mitigating proximity bias.
Strengths: - A comprehensive study of pretrained ImageNet models involving various neural network architectures on their model calibration and proximity bias evaluation. The empirical study is performed on image classification tasks (under balanced, long-tail, and distribution-shift settings) and text classification tasks.
- Experimental evaluation is thorough, using various calibration metrics.
- The paper provides many interesting observations from the empirical study related to proximity bias and model calibration, which is an important area of study under long-tailed data distribution settings.
Weaknesses: - The presentation of experimental results and the writeup of Experiments Section 6 can be improved; the findings for many of the questions are deferred to the Appendix without any brief details in the manuscript. I understand this is due to the page limitation, but I would suggest the authors focus on the results that are presented in the main manuscript, or at least include brief summaries in the manuscript.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Are the iNaturalist 2021 results in Table 1 based on transfer learning from pretrained ImageNet models, or directly evaluated on pretrained backbone?
- What are the criteria to choose between density-ratio calibration versus Bin-Mean-shift approach?
- typo in line#309 ImageNet-TL --> ImageNet-LT
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed the limitations of their work in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank reviewer Wiui for the constructive suggestions on the writing of the experiment part. We are also glad that the reviewer finds the phenomenon study comprehensive, the experiment evaluation thorough, the observation interesting, and the issue we identify to be important. Here we answer all the questions and hope they can address the concerns.
**Q1: Are the iNaturalist 2021 results in Table 1 based on transfer learning from pretrained ImageNet models, or directly evaluated on pretrained backbone?**
The results are directly evaluated on the **pretrained backbone**, which, following [2], we download from the repo: https://github.com/visipedia/newt/blob/main/benchmark/README.md
[2] Van Horn G, Cole E, Beery S, Wilber K, Belongie S, Mac Aodha O. Benchmarking representation learning for natural world image collections. CVPR 2021.
**Q2: What are the criteria to choose between density-ratio calibration versus Bin-Mean-shift approach?**
Thanks for your constructive question. First, we would like to clarify that existing calibration algorithms can be classified into 2 types:
1) those producing *continuous outputs* such as softmax, temperature scaling (TS), and other methods (ETS, PTS, PTSK) which scale the input scores in a continuous way; and
2) those producing *discrete outputs*, particularly binning or isotonic regression methods (HB, IR, MIR) which group the samples and output the exact same scores for samples in the same group.
Our density-ratio calibration works by estimating continuous density functions, and is suited for the 1st case when the input confidence scores are continuous. In contrast, our bin-mean-shift method does not rely on densities and is more suited for utilizing outputs from the 2nd category of methods, i.e. discrete outputs. In combination, our 2 calibration approaches form a plug-and-play framework that is applicable to confidence scores of either type. We will revise the paper to clarify this.
**W1: The presentation of experimental results and writeup of Experiments Section 6 can be improved, the findings for many of the questions are pointed to the Appendix without any brief details in the manuscript. I understand this is due to page limitation, but I would suggest the authors focus on the results that are presented in the main manuscript, or include the brief info in the manuscript at least.**
We appreciate your suggestions, and we will include a brief summary of the results when making reference to the appendix to improve the readability and flow of the paper in the future versions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal. I had read the responses and I retain my score. | Rebuttal 1:
Rebuttal: **The following questions primarily pertain to the details of how to compute the bias index, and therefore, we have grouped them together for convenience and clarity.**
To provide further clarity, let's revisit the process of how we calculate the bias index:
1. To do the hypothesis testing, we first split the samples into 5 equal-sized proximity groups and select the highest and lowest proximity groups.
2. From the high proximity group, we randomly select 10,000 points and find corresponding points in the low proximity group that have similar confidence levels.
3. Next, we reverse this process to randomly select 10,000 points from the low proximity group and find corresponding points in the high proximity group with matched confidence.
4. We therefore merge all the points from the high proximity group into $B_H$ and those from the low proximity group into $B_L$, with the $B_H$ and $B_L$ having similar average confidence.
5. We apply the Wilcoxon rank-sum test to evaluate whether the difference in mean accuracy between $B_H$ and $B_L$ is significantly different from zero.
**Q4.1: what is the sensitivity of two points having the 'same' confidence?**
If we understand correctly, the reviewer is asking whether we can ensure that two points have the same confidence. Firstly, we would like to clarify that *the two samples have the **same confidence*** actually means they have the *closest confidence in the validation set with the maximum difference less than 0.05*. In practice, we apply nearest neighbor search to find points in the alternate group with approximately the same confidence (i.e. closest possible confidence) to our target. Pairs with significant confidence difference (> 0.05) are excluded, ensuring that $B_H$ and $B_L$ have comparable confidence levels.
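The nearest-confidence matching with the 0.05 reject threshold described above can be sketched as follows (an illustrative implementation; names are ours):

```python
import numpy as np

def match_by_confidence(src_conf, tgt_conf, max_diff=0.05):
    """For each source sample, find the target sample with the closest
    confidence; drop pairs whose confidence gap exceeds max_diff.
    Returns (source index, target index) pairs."""
    order = np.argsort(tgt_conf)
    sorted_tgt = tgt_conf[order]
    pos = np.searchsorted(sorted_tgt, src_conf)
    pairs = []
    for i, p in enumerate(pos):
        # closest candidate is either just below or just above the insertion point
        cands = [c for c in (p - 1, p) if 0 <= c < len(sorted_tgt)]
        j = min(cands, key=lambda c: abs(sorted_tgt[c] - src_conf[i]))
        if abs(sorted_tgt[j] - src_conf[i]) <= max_diff:
            pairs.append((i, int(order[j])))
    return pairs
```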
**Q4.2: Is there a large difference of average confidence between the two, and how well do their confidences intersect?**
If we understand correctly, 'the two' refers to the original groups with the highest and lowest proximities in step 1).
1. In this case, there is indeed an observable difference in their average confidence, with high proximity samples having higher average confidence. Therefore, we apply nearest neighbor search and reject policy to ensure the sampled $B_H$ and $B_L$ in step 5 have almost the same confidence.
2. As we only divide samples into 5 equally-sized groups, their confidences intersect well. When implementing this algorithm, users can also visualize the confidence distribution to verify whether overlaps occur. If their confidence levels have no overlap, we suggest reducing the number of splits from 5 to 3 to ensure low/high proximity groups have similar confidence but different proximities.
**Q4.3: How does the Bias Index change if only B_H is used for sampling points and B_L is used for matching, and vice versa?**
Thank you for your question. As described in steps 2 and 3, we do not use only $B_H$ for sampling and $B_L$ for matching; both groups are used for sampling and matching.
In addition, as we highlighted in Q4.1, we employ the nearest neighbor search and reject strategy to ensure that $B_H$ and $B_L$ have similar average confidences. This means that the samples drawn primarily fall within the range of intersecting confidences, resulting in minimal changes to the bias index even if we randomly choose one group for sampling.
**Q4.4: What if the intermediate groups are used?**
We also conducted experiments in which we divided samples into 3 equal-sized proximity groups, which reduced the proximity difference between the high/low groups (can be seen as the proxy of intermediate groups). We found that the same observations hold true in this context. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing | Accept (poster) | Summary: Ray tracing faces the difficulties of being applied practical real-time applications due to high levels of noise when sample counts are low. Sample counts are often limited to as low as 4 when considering real-time applications (~30ms latency). As such, the paper proposes an end-to-end training of a RL-based sampling importance network, a latent space encoder network, and a denoiser network. As a result, the proposed framework achieves strong performance on several benchmarks when considering performance and latency trade off.
Strengths: - The paper is well-written and easy to follow
- The proposed framework demonstrates strong performance in terms of performance and latency trade off.
Weaknesses: - What is the exact novelty in comparison to previous works? The proposed framework seems to be a combination of RL-based adaptive sampling [22], the use of a sampling importance network [23], spatiotemporal reservoirs (spatiotemporal latent space) [25,27], and the use of denoisers [10], except for minute details, such as not feeding the output of a denoiser to the sampling importance network?
- Without detailed discussions on the major differences between the proposed method and previous works and/or combination of previous works, it is difficult to assess the novelty and contribution of the proposed method.
- How does the memory consumption compare between the proposed method and previous works?
- Is the inference time in Table 1 referring to the latency of the whole framework, including the inference of the networks?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not include the limitation section. One limitation would be slightly higher inference time compared to previous works. Another possible limitation would be larger memory consumption, which cannot be said for certain until the authors provide comparisons.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Dear Reviewer YEyS, thank you for your time and interesting remarks.
> What is the exact novelty in comparison to previous works? The proposed framework seems to be a combination of RL-based adaptive sampling [22], the use of a sampling importance network [23], a spatiotemporal reservoir (spatiotemporal latent space) [25,27], and the use of denoisers [10], except for minute details, such as not feeding the output of a denoiser to the sampling importance network? Without detailed discussions on the major differences between the proposed method and previous works and/or combinations of previous works, it is difficult to assess the novelty and contribution of the proposed method.
We have 2 major novelties compared to previous work (cf lines 108-144):
1) Learned latent space representation: The only previous works using a latent space are ReSTIR and derivative works. Their latent space consists of storing a buffer of lights for every pixel. They update the buffer by sampling from new visible lights or reusing spatiotemporally nearby pixels, leveraging simple statistical heuristics (importance sampling). This latent space is directly used only for denoising.
Differently, our latent space consists of storing a learned encoding given all previously available information to guide future sampling and denoising. It is updated using a dedicated neural network given new information and the previous state.
2) RL-based learning of the sampling recommendation model:
The only related work using RL is [22], which is not suitable for real-time and does not deal with adaptive path tracing but with the incompatible task of incidence radiance field.
Previous work targeting sampling recommendation (DASR [20], NTAS [23]) used approximated gradients instead of RL. We avoided such an approximation and instead formulated an exact end-to-end optimization criterion using reinforcement learning. It also allowed us to reach unprecedentedly low sample counts, and hence inference times, since non-RL methods naturally cannot be scaled below 1 spp: there is no longer a gradient when no samples are collected.
To identify the impact of each component, we perform an ablation study (Lines 277ff. and Table 1) showing that each contribution is essential to achieve the final performance.
> How does the memory consumption compare between the proposed method and previous works?
The memory usage from neural network weights is minimal (<50MB). We store more input and state information than previous work: besides the sampled pixel values (24 channels; RGB values for up to 8 non-averaged samples) and 7 additional input channels (3 for surface normals, 3 for albedo, 1 for depth), we store 32 channels for the state (warped latent space)---a total of 53 channels more than previous work (which uses 7 channels for additional data and 3 channels for pixel values), or 212 Byte/pixel, which amounts to 110 MB at 720x720 pixels. As working memory during inference, we can operate with a minimal buffer size as the high-resolution images can be tiled. With a total memory footprint in the order of 200 MB, the overhead is insignificant (<2%) compared to the >12GB of textures loaded for current games on high-end GPUs where such capacity is available.
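As a sanity check, the channel and byte arithmetic in this answer can be reproduced in a few lines (channel counts are taken from this rebuttal; 4 bytes per channel, i.e. float32 storage, is an assumption):

```python
# Per-pixel memory arithmetic from the rebuttal; float32 storage is assumed.
BYTES_PER_CHANNEL = 4

sample_channels = 24       # RGB values for up to 8 non-averaged samples
aux_channels = 7           # 3 surface normals + 3 albedo + 1 depth
state_channels = 32        # warped latent space
prior_work_channels = 10   # 3 pixel-value + 7 additional channels in previous work

extra_channels = sample_channels + aux_channels + state_channels - prior_work_channels
bytes_per_pixel = extra_channels * BYTES_PER_CHANNEL
total_mb = bytes_per_pixel * 720 * 720 / 1e6

print(extra_channels, bytes_per_pixel, round(total_mb))  # 53 212 110
```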
> Is the inference time in Table 1 referring to the latency of the whole framework, including the inference of the networks?
In Table 1, we report the execution time of the method without the cost of collecting ray tracing samples, i.e., the neural networks and warping for our method, or the best implementation we could find of related work such as ReSTIR, with Monte-Carlo path tracing setting the baseline at 0 inference time. We report these as they are largely independent of the scene. In Figure 3, we compare the latencies of the entire systems, including the networks and collecting the pixel samples, evaluated on the test datasets.
> The paper does not include the limitation section. One limitation would be slightly higher inference time compared to previous works. Another possible limitation would be larger memory consumption, which cannot be said for certain until the authors provide comparisons.
The main purpose of our paper is to deliver a better PSNR/inference-time trade-off. Our main results (Figure 3) show the relationship between frame-latency (which is the inference time of the whole framework including the pixel sampling) and the quality for our method (including variants) and relevant related work. Our method is Pareto optimal (always on the upper left), which means it has a lower inference time for equal visual quality compared to all other methods. Our method has a marginally larger memory consumption.
Based on the feedback of Reviewer WTeK, and if permitted, we will add a "limitation" section to the paper or at least to the appendix (see official comment).
Thank you for dedicating time to evaluate our paper and provide thoughtful feedback. We appreciate your acknowledgment of its strengths in soundness, presentation, and contribution. We understand that some reservations led to a borderline reject rating, and hope to have dispelled any misgivings through this rebuttal, demonstrating the merit of our work and addressing any lingering doubts. If there are still concerns, we'll promptly respond to further queries.
The authors
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for clarifications.
I have carefully read the rebuttal and acknowledged that the proposed approach demonstrates strong performance.
I will reconsider the rating after discussion with other reviewers. | Summary: This paper proposes two techniques to improve the performance of Monte-Carlo path tracing for real-time image rendering: 1) keep all previously sampled values to improve spatio-temporal information reuse; 2) use reinforcement learning to optimize the sampling importance network, avoiding explicit numerically approximated gradients.
Strengths: The problem is clearly illustrated and the motivation is easy to understand. The experiments are adequate to demonstrate the arguments of this paper.
Weaknesses: 1. Line 71 describes some methods that improve the spatiotemporal reuse by storing not only the averaged pixel values. But there is no further discussion about their differences from the proposed method.
2. Although a well-trained RL framework performs well during inference, the training of an RL framework is usually unstable. The authors are suggested to add more details about the training techniques.
------------Post Rebuttal-----------
I have carefully read other reviewers' comments and the authors' responses. Although I think this paper proposes a good method for the defined problem, other reviewers argue that most of the techniques used in this paper have been discussed in previous papers, which heavily weakens the novelty of this paper. Since I am not familiar with this topic, I adjust my score due to the concern about novelty.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There are no potential negative societal impacts of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Dear Reviewer 1MpK, thank you for your time and interesting remarks.
> Line 71 describes some methods that improve the spatiotemporal reuse. But there is no further discussion about their differences from the proposed method.
The mentioned methods are ReSTIR and derivative work of ReSTIR. Those methods do not use neural networks and are very different from ours:
Their latent space consists of storing a buffer of lights for every pixel. They update the buffer by sampling from new visible lights or reusing spatiotemporally nearby pixels, leveraging simple statistical heuristics (importance sampling). This latent space is directly used only for denoising.
Differently, our latent space consists of storing a learned encoding given all previously available information to guide future sampling and denoising. It is updated using a dedicated neural network given new information and the previous state.
We further discuss their conceptual and qualitative differences to our method in Figure 3 and lines [108-123, 266-276, 302-307].
> Although a well-trained RL framework performs well during inference, the training of an RL framework is usually unstable. The authors are suggested to add more details about the training techniques.
RL training is indeed often unstable. This is linked to several factors such as delayed rewards, non-stationarity, and off-policy learning. In our case, these factors are not as present as in other applications: rewards are delayed for very few frames/time steps as opposed to robotics applications; non-stationarity is present through the latent space, however, an effective latent space representation (i.e., our only system state) is not required to produce a first meaningful output and thus allows for a smooth instead of a steep learning curve; off-policy learning is commonly used as on-policy learning requires too many expensive simulation steps, however, as our environment is not as complex to simulate (in the order of tens of milliseconds; producing over 1M results/output values per step), we can apply on-policy learning. We further train only the sampling importance network using RL to overcome the hard/non-differentiable decision on how many samples to collect and use standard backpropagation for all other components. For those reasons, our training was relatively stable and converged with insignificant variance to the results presented. Here are other training parameters we tuned: a buffer of size 100, a kl_coeff of 5e-7, a kl_target of 5e-8, a lambda of 0.2, a vf_loss_coeff of 0.5, and an entropy_coeff of 1e-5.
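For illustration, the tuned values listed above could be collected into an RLlib-style PPO configuration; the dictionary below is a hypothetical sketch (the key names and the mapping of "buffer size" to a config entry are assumptions, not stated by the authors):

```python
# Hypothetical PPO configuration echoing the tuned values from the rebuttal.
# Key names follow RLlib conventions and are assumptions, not the authors' code.
ppo_config = {
    "buffer_size": 100,     # "a buffer of size 100"
    "kl_coeff": 5e-7,       # KL penalty coefficient
    "kl_target": 5e-8,      # target KL divergence
    "lambda": 0.2,          # GAE lambda
    "vf_loss_coeff": 0.5,   # value-function loss weight
    "entropy_coeff": 1e-5,  # entropy bonus weight
}
```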
We hope that we provided satisfactory answers to your questions and comments, and that any remaining concerns could be addressed.
The authors | Summary: This paper tackles the problem of Monte-Carlo path tracing, which is an important field for computer graphics and rendering. The paper first analyzes the current state of the art and identifies mainly two flaws, which are addressed thereafter: First, the authors introduce a spatiotemporal latent space serving as input to the denoising autoencoder, which outputs the final image frames, as well as (as feedback) to the importance sampling network, which outputs a distribution of rays per pixel. Second, as opposed to previous work, this distribution is not sampled from a sampling heatmap, since this is problematic during training due to the coarse numerical estimation of the output gradient. Instead, an RL-based method for importance sampling is used during training, which removes the need for a numerical estimation of the gradient. The resulting model seems to yield higher PSNR values than previous works. However, I am very unfamiliar with this topic, so I cannot really estimate how thorough the evaluation is.
Strengths: * The paper seems to identify limitations and flaws in current methods, which are then overcome by simple solutions that can furthermore be trained in combination in an end-to-end manner. However, since I don't know all the related work, I could have missed something.
* The proposed method seems to yield better results (measured by PSNR) at the cost of slightly higher inference times
Weaknesses: * For Table 1, there is no information regarding the dataset and resolution of the images/videos used to assess the presented values. This is confusing, because the PSNR values are shown for 4 spp, resulting in an inference time of 22.5 ms, which is faster than the baseline. However, in Fig. 3, the average inference time for the proposed method is much higher. Can the authors clarify this?
* The results are not presented as a function of image resolution (or number of pixels in the image). It would be interesting and important (e.g., to estimate the method's real-world applicability) to report that, since nowadays most images/videos to be rendered are of high spatial resolution.
* The training datasets are not listed anywhere. Is the method trained on all three presented datasets jointly or independently for each of them?
* Writing down the final objective function for the overall model would be good, since it would give the reader a final overview and summarize all the different aspects of the method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Is PSNR the only important metric in this field? SSIM should be a comparable measure in my experience. Why is SSIM not reported?
* How does the method generalize to unseen data (not from validation/test sets). Did you try to apply it to some other data?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations section is present. It would be important to add such a section, in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer WTeK, thank you for your time and interesting remarks.
> For Table 1, there is no information regarding the dataset and resolution of the images/videos used to assess the presented values. This is confusing, because the PSNR values are shown for 4 spp, resulting in an inference time of 22.5 ms. However, in Fig. 3, the average inference time for the proposed method is much higher.
In Table 1, we report the aggregate inference time of the neural networks, or of the method in general for related work. We exclude the pixel sampling time (thus also the 0.0 inference time for Monte-Carlo sampling). Differently, in Figure 3, we report the entire frame latency of the whole framework including the pixel sampling, showing that our method is effectively faster. In terms of evaluation data, we use the procedure outlined in Section 3.1: Experimental Setup, i.e., the 4 mentioned scenes with leave-one-scene-out cross-validation.
> The results are not presented as a function of image resolution
We trained and evaluated using resolutions of 720x720 pixels (cf. Line 233), as in OIDN, and more than in other related work (DASR: 512x512). Our method scales linearly in terms of inference time with the number of pixels for higher resolutions. For high-resolution rendering, the outputs of ray-tracing methods such as ours as well as related work are commonly applied at a below-native resolution and upsampled/interpolated using methods such as Nvidia's DLSS or AMD's FidelityFX Super Resolution as a post-processing step. An accurate estimate for your preferred resolution can thus be obtained by scaling the reported time proportionally to the ratio between the desired pixel count (after considering the preferred upscaling factor) and $720^2$. The relative latency to related work will remain unchanged.
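The proportional-scaling estimate described above can be expressed as a small helper (the function and its defaults are illustrative, not from the paper):

```python
def scale_latency(reported_ms, target_pixels, upscale_factor=1.0,
                  base_pixels=720 * 720):
    """Estimate inference latency at another resolution, assuming the linear
    scaling in pixel count described above. `upscale_factor` models rendering
    below native resolution and upsampling (e.g., DLSS-style post-processing)."""
    rendered_pixels = target_pixels / (upscale_factor ** 2)
    return reported_ms * rendered_pixels / base_pixels

# 1080p output rendered at half resolution per axis has exactly 720*720 pixels,
# so the reported latency carries over unchanged:
print(scale_latency(22.5, 1920 * 1080, upscale_factor=2))  # 22.5
```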
> Training datasets are nowhere listed? Is the method trained on all three presented datasets jointly or independently for each of them?
There are no commonly used datasets available for evaluating ray tracing methods. However, several scenes are available with licenses compatible to perform such research. We list the 4 scenes used in this work in Section 3.1 Experimental Setup (cf lines 211-213), and further release the code to reproduce the results in the supplementary materials. For the quantitative evaluations, we perform 4-fold cross-validation across the scenes (cf. lines 235-236), i.e., we pick one scene for testing and train on the remaining 3 scenes, then rotate to the next scenes for evaluation while averaging the results.
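The 4-fold leave-one-scene-out procedure can be sketched as follows; the scene names and the train/eval callables are placeholders, not the actual pipeline:

```python
# Sketch of leave-one-scene-out cross-validation; names are placeholders.
scenes = ["scene_a", "scene_b", "scene_c", "scene_d"]  # hypothetical scene names

def cross_validate(scenes, train_fn, eval_fn):
    """Hold out each scene once, train on the remaining three, average scores."""
    scores = []
    for held_out in scenes:
        train_set = [s for s in scenes if s != held_out]
        model = train_fn(train_set)
        scores.append(eval_fn(model, held_out))
    return sum(scores) / len(scores)
```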
> Writing down the final objective function for the overall model would be good, since it would give the reader a final overview and summarize all the different aspects of the method.
If we understand the remark correctly, this is presented in Line 192.
> Is PSNR the only important metric in this field? SSIM should be a comparable measure in my experience. Why is SSIM not reported?
We agree that SSIM is the better metric as it is closer to visual perception, and thus we use its multi-scale variant in our loss function (Line 186). We report PSNR as the metric for comparability as all related works (DASR, NTAS, ReSTIR, OIDN, ...) evaluate with PSNR.
> How does the method generalize to unseen data (not from validation/test sets). Did you try to apply it to some other data?
The test data is unseen. While we would like to expand both the training and test sets, the available data with a license permitting research use is very limited and most related work trains and evaluates on non-public data, making the results non-reproducible. We aim to improve on this by releasing code of not just our own method but also re-implementations of related work, with consistent evaluations on the same data.
> No limitations section is present. It would be important to add such a section in my opinion.
If permitted, we will add the following section to the paper or at least to the appendix.
```
Limitations: The current method does not consider the application of after-effects such as motion blur, which could allow the reduction of samples collected on fast-moving objects, while in the current setting we would implicitly focus the ray tracing samples on such an object to minimize the error. Further, this method is applicable to entertainment applications and can potentially generate artifacts not present in the real scene, which could be an obstacle in application scenarios such as VR/AR medical devices. Additionally, it requires a system capable of performing both path tracing and DNN inference. While current graphics cards provide this capability and we assess the frame rendering time considering both components, the underlying workloads are significantly different: DNN inference is generally a very structured and high arithmetic intensity workload whereas path tracing is branching-intensive and performs random look-ups into memory. In future systems, it is conceivable that dedicated devices are used for each step, which would enforce a fixed capability for each type of compute and limit a free trade-off between the two, which we make use of in this work. Additionally, applying a DNN adds a memory overhead, although it can remain minimal compared to other components such as textures that commonly fill most of the GPU's memory. Specifically, the model requires 110 MB to store the input and latent space data, <50 MB to store the model, and a few MB of working memory for intermediate feature maps that can be processed in tiles. This overhead is insignificant (<2%) compared to the >12GB of textures loaded for current games on high-end GPUs where such capacity is available.
```
We appreciate your dedication to thoroughly review NeurIPS papers including ours. We hope to have cleared out all or most of your concerns and would be delighted to see a revision of the rating. If any questions or doubts remain, we will gladly answer any further questions.
The authors | Summary: This paper proposes to use reinforcement learning (RL) to improve adaptive sampling effectiveness in Monte Carlo ray tracing. Another key contribution they claim is the use of a latent space representation to encode temporal information, which improves the reuse of spatiotemporal pixel information across frames. The impact of this work is to improve quality of real-time path tracing results with respect to noise and temporal stability.
Strengths: The paper is well written, and provides a clear preliminary of the necessary background on adaptive sampling and denoising, as well as reviewing current SOTA approaches which are compared with their method. The usage of the "spatiotemporal" latent space, which is claimed to encode temporal information more effectively than previous works, is an insightful idea. The RL-based, learned importance sampler is also an interesting approach that addresses limitations of prior works. Qualitatively and quantitatively, their method outperforms other methods by a significant margin.
Weaknesses: Evaluation of the method is limited to only a small number of scenes. It would be nice to see a larger scale evaluation, though it is understandable that training/evaluation data is laborious to assemble.
Possible lack of theoretical novelty -- the method is validated with empirical results which are quite promising, but there is little discussion or analysis of the stability / robustness of the method.
Additional images for qualitative comparison could be helpful for visualizing some of the key concepts discussed in the paper. For example, visualizing qualitative differences between various sample counts, or dedicated visualizations of the estimated sample maps and/or RL state.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Figure 2 and through the paper, some of the ground truth rendered images appear very dark. Is this intentional and if so, why?
How sensitive is the inference performance to the choice of training data? Are there any challenges generalizing to new rendered scenes?
This paper reviews the approach for approximating gradients from "Deep Adaptive Sampling for Low Sample Count Rendering (Kuznetsov et al.)", and proposes RL as an alternative to their formulation. Out of curiosity, would integrating a differentiable renderer (e.g. Mitsuba) into the pipeline from Kuznetsov et al. provide any useful additional gradients?
Could you elaborate on why RL specifically is a good learning paradigm for learning the importance sampling module? Does using RL
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper does not identify any limitations, and I do not see any clear limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer CLYz, thank you for your time and interesting remarks.
> Additional images for qualitative comparison could be helpful for visualizing some of the key concepts discussed in the paper. For example, visualizing qualitative differences between various sample counts, or dedicated visualizations of the estimated sample maps and/or RL state.
We provide qualitative comparisons and estimated sample maps for various sample counts in the supplementary material as well as some additional visualizations: state (stt.png), sample maps (hitmap.png, 540hitmap.png) and comparisons for various sample counts (0.1spp.gif, 1.0spp.gif, 4.0spp.gif) in the following anonymous link: https://drive.google.com/drive/folders/1zrKGelIwSxcGkww6u3-MYDRyVTO45SJ3?usp=sharing . We will include them all in the final supplementary materials.
> How sensitive is the inference performance to the choice of training data? Are there any challenges generalizing to new rendered scenes?
Our main results are obtained by performing a Leave One Scene Out Validation (See line 235). This means that given one test scene, we train on all scenes except the test scene, and then evaluate on the test scene. We repeat the same procedure such that every scene becomes the test scene once and finally average results. Our results show low sensitivity to the choice of training data (see appendix for individual Leave One Scene Out results) and good generalization to the unseen data from the test scene.
> In Figure 2 and through the paper, some of the ground truth rendered images appear very dark. Is this intentional and if so, why?
The ground truth images selected for visualization in the paper are not outliers in terms of brightness. We agree that two of the four images appear dark in the paper. However, zooming in on them in full-screen, without any of the paper's white background visible so as to mimic an in-game experience, will change the perception to this being a rather "normal" brightness.
We do not feel comfortable performing range compression or otherwise modifying the data for an improved visual appearance of the ground truth frames in the figures in the paper.
> This paper reviews the approach for approximating gradients from "Deep Adaptive Sampling for Low Sample Count Rendering (Kuznetsov et al.)", and proposes RL as an alternative to their formulation. Out of curiosity, would integrating a differentiable renderer (e.g. Mitsuba) into the pipeline from Kuznetsov et al. provide any useful additional gradients?
While the renderer would become differentiable, the hard/non-differentiable decision on how many samples to collect would remain. It would open up the possibility to learn a continuous-valued sampling recommendation which could be quantized to integer values with a straight-through estimator, stochastic rounding, or similar way during the forward pass and inaccurate gradients in the backward pass. However, it is not immediately obvious how this would impact the results, i.e., if there would be a benefit to dropping the RL component in favor of inaccurate gradients (straight-through estimator, stochastic rounding). In any case, it would be impractical, as artists develop scenes for a specific game engine that is then the only program that can render the desired output, which generally does not support differentiable rendering.
> Could you elaborate on why RL specifically is a good learning paradigm for learning the importance sampling module?
Quantized problems, such as choosing an integer number of samples to collect per pixel, are commonly addressed by using hard decisions or stochastic rounding in the forward step and computing (very) approximate gradients for the backward pass (e.g., straight-through estimator). This negatively impacts the learning process and quality of results. We can avoid such an approximation and instead formulate an exact end-to-end optimization criterion using reinforcement learning. Using RL further allows collecting less than 1spp, which is a common limitation in related work that strongly impacts the capabilities in low-latency/real-time rendering scenarios. Permitting to collect 0 samples at some pixels with gradient-based learning implies that there is no longer a gradient to the sampling recommendation network and it can thus not be learned appropriately for this scenario.
We hope that we provided satisfactory answers to your questions and comments, and that any remaining concerns could be addressed.
The authors
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for answering my questions. I have read the rebuttal and acknowledge your clarifications.
I believe one of this paper's main contributions is the design of a system that produces results that are superior to other methods empirically. As such, my main remaining concern is the lack of more extensive experimental validation of the results. Some of the closely-related works that are mentioned evaluate their methods on a greater diversity of scenes. I have viewed all of the supplementary content, and believe that more extensive evaluation would make this work more convincing.
I will keep my rating the same for now. | Rebuttal 1:
Rebuttal: Based on the feedback of Reviewers WTeK and YEyS, and if permitted, we will add the following section to the paper or at least to the appendix. Please look at individual rebuttals for all other comments and answers.
```
Limitations: The current method does not consider the application of after-effects such as motion blur, which could allow the reduction of samples collected on fast-moving objects, while in the current setting we would implicitly focus the ray tracing samples on such an object to minimize the error. Further, this method is applicable to entertainment applications and can potentially generate artifacts not present in the real scene, which could be an obstacle in application scenarios such as VR/AR medical devices. Additionally, it requires a system capable of performing both path tracing and DNN inference. While current graphics cards provide this capability and we assess the frame rendering time considering both components, the underlying workloads are significantly different: DNN inference is generally a very structured and high arithmetic intensity workload whereas path tracing is branching-intensive and performs random look-ups into memory. As these are fundamentally different, we see dedicated ray tracing units on modern GPUs. In future systems, it is conceivable that dedicated devices are used for each step, which would enforce a fixed capability for each type of compute and limit a free trade-off between the two, which we make use of in this work. Additionally, applying a DNN adds a memory overhead, although it can remain minimal compared to other components such as textures that commonly fill most of the GPU's memory. Specifically, the model requires 110 MB more memory to store the input and latent space data, <50 MB to store the model, and a few MB of working memory for intermediate feature maps that can be processed in tiles.
Memory overhead: We store the following data in memory: 1) the model weights (<50MB), 2) the sampled pixel values (24 channels; RGB values for up to 8 non-averaged samples), 3) 7 additional input channels (3 for surface normals, 3 for albedo, 1 for depth), 4) 32 channels for the state (warped latent space), and 5) a small working memory for intermediate feature data during inference. The components 2-4 add up to 53 more channels than previous works (which use 3 channels for pixel values and 7 for additional data); hence 212 Byte/pixel; 110 MB in total for 720x720 pixel data; and the required working memory is minimal as the inference can be done on tiles. With a total memory footprint in the order of 200 MB, the overhead is insignificant (<2%) compared to the >12GB of textures loaded for current games on high-end GPUs where such capacity is available.
We hope that any remaining concerns could be addressed.
The authors | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Optimal Algorithms for the Inhomogeneous Spiked Wigner Model | Accept (poster) | Summary: This paper considers approximate message passing algorithms for reconstructing a rank-1 signal when corrupted by a symmetric matrix of noise with a block-variance structure; it is assumed the signal _x^*_ has iid coordinates generated from a prior distribution. One then forms the matrix $Y = x^* (x^*)^T/\sqrt{N} + A \odot \sqrt{\Delta}$, where $\Delta$ is a blockwise constant, positive matrix. The matrix $A$ is a symmetric gaussian matrix with iid off-diagonal entries of size $N \times N$.
The number of blocks $q$ appears to be bounded independently of the matrix size, and it is assumed one has access to the scale matrix $\Delta$.
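For concreteness, an instance of the model described above can be generated in a few lines. The following is our own sketch, not the paper's code; the block sizes, the Rademacher prior, and the $1/\sqrt{N}$ normalization of the symmetrized noise are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, q = 400, 2

# Block-constant variance profile: a symmetric q x q matrix of variances,
# lifted to an N x N matrix through a group-membership map g : [N] -> [q].
Delta_blocks = np.array([[1.0, 2.0],
                         [2.0, 0.5]])
g = rng.integers(0, q, size=N)
Delta = Delta_blocks[np.ix_(g, g)]

# Rank-1 signal with iid coordinates (Rademacher prior, as one example).
x_star = rng.choice([-1.0, 1.0], size=N)

# Symmetric Gaussian noise; here symmetrized so entries have variance ~1/N.
G = rng.normal(size=(N, N))
A = (G + G.T) / np.sqrt(2 * N)

# Observation: Y = x* (x*)^T / sqrt(N) + A (entrywise) sqrt(Delta).
Y = np.outer(x_star, x_star) / np.sqrt(N) + A * np.sqrt(Delta)
```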
The first algorithm is a proper generalization of an AMP recursion for the case of a rank-1 signal with Wigner noise. The state evolution is shown to converge (Theorem 1.2 -- using a modification of existing techniques) and correspond to the solution of the Bayes optimal estimator (Theorem 1.4).
The spectral method designs a linear recurrence which is (locally? and conditionally?) optimal, in that it also recovers the same estimator by computing a principal eigenvector of an associated matrix.
Strengths: 1) The mathematical content is sound. The main content (theorem 1.2) is proven in the supplemental material. This adds an algorithmic aspect to a model which has attracted recent information theoretic attention.
2) The paper presents an algorithm which is optimal for the problem posed. The algorithm is part of a larger well-studied class.
3) The performance of the algorithm is illustrated numerically in a simple case.
4) A linear method is presented, which may reproduce the more complicated general AMP performance for the hidden spike.
Weaknesses: 0) The article does not provide any broader context for the technical results it develops; all but the first few pages are technicalities related to AMP theory. It is targeted at experts in approximate message passing, and it does not develop much of the information-theoretic aspects of the model (which I gather are proven in Guionnet et al). There is no conclusion. There is nothing in the way of practical considerations or relations to application (although I would say this alone is forgivable, if the paper were otherwise immaculate). Much of the main text is occupied by technicalities related to the formulation of the main theorem and in summoning relevant AMP theory from Javanmard and Montanari as well as Deshpande et al.
1) The main theorem (1.2) is an adaptation of an existing result. Moreover, approximate message passing algorithms are well studied and many theorems exist for them. The presence of non-iid noise makes it somewhat unique, but I think it is fair to say that this is not a big extension of existing theorems. (In particular, there is a change of variables to connect the homogeneous and inhomogeneous cases).
2) The spectral method, which is introduced, is largely left half-baked. There is a conjecture (1.6) related to it, and there is a 1 page description of how the method is developed. There is an equation (42) showing that the overlap evolution is unstable when a certain Perron-Frobenius eigenvalue is larger than 1. But finally, the main points here are left as conjectures.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Major:
0) Much of the valuable main paper space is used on technicalities of how one AMP is formally recast as another. What conclusions should be drawn from this? What is the context of the information-theoretic properties of the model? What is the surprise of the linear method (mentioned briefly in the third bullet at the start?). Lots of needed context is missing from this paper.
1) The spectral method proposed at the end would seem to suggest that there is, finally, no need for the AMP algorithm (spectral methods are wildly simpler). However the actual claims about what the spectral method does are left quite implicit (reading section 3). In simple language, what do you conjecture happens for the spectral method above/below what transition? A simple theorem in section 3 might be appropriate.
2) What is the recovery transition, and why should the spectral method be optimal? (or what more than optimism justifies Conjecture 1.6)? Good numerical results would be appropriate.
3) The version of the AMP algorithm that solves the inhomogeneous spiked Wigner problem is finally presented in (32). The assumptions that you formulate are not clearly directly relevant to this theorem (i.e. you could formulate your algorithm without all the heavy assumptions). This means that in some sense these assumptions are really technicalities suited for those who want AMP details and/or proof details. What are the minimal assumptions you need for actually solving the inhomogeneous spiked Wigner model?
4) A discussion of the extent to which it is possible to actually implement the AMP iteration for the posterior would add to the broader appeal.
Minor:
* What is $t$ in Assumption 1.1, part 2? Does the family include all $t$? Also, this sentence is malformed. What is the assumption?
* What is M in (4)?
* Eq (11) contains an $x_j$ as yet unused?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The assumptions are clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for their insightful comments and valuable suggestions. We will incorporate all the suggestions into the final version, regardless of acceptance.
• Weakness 0: The main motivation of this paper was to approach the analysis of an inhomogeneous spiked Wigner model from an algorithmic point of view to complement the already extensive informational theoretical analysis of such models in Barbier and Reeves (2020); Behne and Reeves (2022); Alberici et al. (2022a,b); Guionnet et al. (2022). Through this process, we discovered that a linearization of the AMP algorithm implied that a simple spectral method applied on a transformed matrix will achieve weak recovery. The weak recovery transition of this spectral method is conjectured to coincide with the information theoretic transition for weak recovery.
• Weakness 1: While our proof uses a reduction to a known theorem, we fail to see how it is a weakness. By connecting with an existing framework, existing algorithms can also be simply extended to the inhomogeneous framework.
Additionally, the derivation of the inhomogeneous AMP in (3.5) is a technically non-trivial contribution, as it involves a balanced normalization of the matrix by the variance, which is not an obvious choice. A more natural normalization by the standard deviation was proven to be suboptimal in Guionnet et al. (2022), and applying the classical AMP for the spiked Wigner model to this matrix led to very poor results. Recasting the AMP provided a simple proof in the end, but finding the correct embedding essentially required us to derive an AMP from scratch. Furthermore, a local analysis of the inhomogeneous AMP led to an interesting novel conjecture in random matrix theory, which states that the spectral method applied to a transformed matrix in (2.5) achieves the BBP transition at the optimal SNR.
• Weakness 2: While the final result is left as a conjecture, it rests on solid and time-tested heuristic methods (see e.g. Maillard, Krzakala, Lu and Zdeborova [2021], or Venkataramanan, Kogler, and Mondelli [2022]) and is backed by numerical simulations.
A direct rigorous treatment of this spectral method is a hard open problem in random matrix theory. Tools have been developed for matrices with noise profiles, but very few explicit formulas exist in comparison to Wigner matrices. More intuitive guesses for an optimal matrix were proposed in Guionnet et al. [2022], but were proven to be incorrect. In general, characterizations of outlying eigenvalues are given implicitly for inhomogeneous matrices, so guessing the correct optimal matrix is completely non-trivial. The linearization of the fixed point of an AMP in this work led us to recover the optimal matrix (7). Our intuition gained from the AMP is a significant step towards a rigorous treatment of the spectral method, that will likely require a very fine analysis.
• Question 0: a) The model has been introduced (and its information-theoretic properties characterized) in Guionnet et al. [2022], Behne and Reeves [2022], Chen et al. [2021], Alberici et al [2022] (see line 34-36), motivated by the inhomogenous version of low-rank factorization problems (see line 14-24)
b) The surprise for the linear method lies in the fact that existing methods, such as PCA or reweighted PCA, fail in this and other inhomogeneous models, see e.g. Guionnet et al. [2022]. Finding a successful spectral method was an open question.
Our algorithmic point of view was used to tackle a peculiarity that was seen in the information theoretic recovery phase transitions. In Guionnet et al. [2022], an intuitive ”homogenization” of the noise resulted in a BBP phase transition at a non-optimal signal to noise ratio. We wanted to use approximate message passing and an analysis of a linearization around the fixed points of this AMP to find the correct way to transform the noise profile, which is defined on equation (7). Conjecture 1.6 on the optimality of the spectral method is also a nice conjecture in RMT that can be solved by studying a difficult system of quadratic vector equations. Our algorithmic point of view provided a nice conjecture that is supported by the stability and convergence of numerical simulations.
• Question 1: We agree that the spectral method proposed at the end is a remarkable approach, and certainly simpler than AMP. However, there is no way to guess the form of the operator without deriving AMP in the first place! The derivation of AMP and its linearization to obtain the spectral method is our contribution. The spectral method is conjectured to achieve weak recovery of the prior in the theoretical detectability regime. The full AMP algorithm is shown to approach the optimal MMSE.
• Question 2: In this model, the information theory analysis indicates that no recovery of the hidden truth is possible (no matter the algorithm) as long as SNR < 1. If SNR > 1, however, it is possible to recover a noisy version of the hidden truth. This is called ”weak recovery” in information theory, see Guionnet et al. [2022], Behne and Reeves [2022]. Our AMP is shown to be (asymptotically) able to perform weak recovery in practice (and in fact recover the MMSE estimator in linear time, a task that in principle requires sampling a high-dimensional distribution).
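To make the SNR = 1 threshold concrete, here is a small numerical illustration of the classical BBP transition for the *homogeneous* spiked Wigner model, where plain PCA already achieves the threshold (our sketch, not the paper's code; in the inhomogeneous model the same behaviour is conjectured only after applying the paper's matrix transformation, which we do not reproduce here):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1500
x = rng.choice([-1.0, 1.0], size=N)  # Rademacher signal

def pca_overlap(snr):
    """Normalized overlap between the top eigenvector of Y and the signal."""
    G = rng.normal(size=(N, N))
    W = (G + G.T) / np.sqrt(2 * N)            # Wigner noise, entries ~ N(0, 1/N)
    Y = (snr / N) * np.outer(x, x) + W
    v = np.linalg.eigh(Y)[1][:, -1]           # principal eigenvector (unit norm)
    return abs(v @ x) / np.sqrt(N)            # |<v, x>| / (||v|| ||x||)

lo, hi = pca_overlap(0.5), pca_overlap(4.0)
print(round(lo, 2), round(hi, 2))  # ~0 below the SNR=1 threshold, large above
```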
• Question 3: We present these conditions for the mathematically minded audience. In practice, it suffices to use either spectral or random initializations to achieve good results numerically.
• Question 4: AMP can be easily implemented in python, see for example the github repositories by Takashi Takahashi or Kuan Hsieh. Our AMP is very close to the spatially coupled AMP that is being used in error-correcting-codes, see e.g. Barbier, & Krzakala. (2017) or Barbier; Dia; Macris (2019).
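As a concrete check of that claim, here is a minimal AMP sketch for the *homogeneous* spiked Wigner model with Rademacher prior (our illustration, not the repositories mentioned above; the tanh posterior-mean denoiser and the Onsager term follow the standard Bayes-AMP recipe, and the weakly informative initialization is an assumption made to keep the demo short):

```python
import numpy as np

rng = np.random.default_rng(0)
N, snr, T = 2000, 2.0, 40

x_star = rng.choice([-1.0, 1.0], size=N)
G = rng.normal(size=(N, N))
W = (G + G.T) / np.sqrt(2 * N)               # Wigner noise, entries ~ N(0, 1/N)
Y = (snr / N) * np.outer(x_star, x_star) + W

# Bayes-AMP for the Rademacher prior: denoiser f(z) = tanh(snr * z),
# with the Onsager correction b_t = (1/N) sum_i f'(z_i).
z = 0.1 * x_star + rng.normal(size=N)        # weakly informative start (demo only)
xhat_prev = np.zeros(N)
for _ in range(T):
    xhat = np.tanh(snr * z)
    b = snr * np.mean(1.0 - xhat**2)
    z, xhat_prev = Y @ xhat - b * xhat_prev, xhat

overlap = abs(np.tanh(snr * z) @ x_star) / N
print(round(overlap, 2))                     # close to 1 well above the threshold
```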
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications.
I think I was unfairly pessimistic in my initial review score -- especially ignoring the role the AMP theorem played in deriving the spectral algorithm -- and I have increased the score to 6. The paper contains plenty for a NeurIPS publication: a theorem establishing a rigorous AMP convergence for a natural inferential problem and a derivative (in the non-mathematical sense) spectral algorithm which is clearly nontrivial and is a natural candidate for follow-up work. | Summary: This paper studies the (symmetric) rank-1 matrix estimation problem with inhomogeneous noise. Here inhomogeneous noise refers to a symmetric noise matrix that is block-wise constant where the number of blocks is a constant relative to the dimension.
This paper proposes an approximate message passing (AMP) algorithm and shows the corresponding state evolution result.
Another piece of contribution is the design of a spectral algorithm that outputs the principal eigenvector of a rescaled and recentered matrix.
Numerics suggest that this outperforms the naive estimator of the principal eigenvector of the data matrix per se.
Strengths: The most interesting (at least to me) part of the paper is Section 3 where a nonstandard spectral algorithm is introduced and analyzed to some extent.
As the authors commented, this estimator outperforms (at least numerically) the naive one corresponding to the original matrix Y.
Another satisfactory aspect of the result is the coincidence between the fixed point of AMP and that of the Bayes-optimal estimator (i.e., E[x^* | Y]), though this is not surprising.
Weaknesses: 1. The majority of the paper is devoted to AMP and its state evolution whose proof is a rather standard reduction to the matrix-valued AMP by Javanmard--Montanari. I didn't check the details carefully since everything goes as expected. But it's still good to see things written down formally.
2. Section 3 is interesting at a heuristic level. However, I have a doubt regarding the authors' claim.
In line 131, it is claimed that "we rigorously show that with SNR<1 our proposed spectral method fails to recover the signal". I don't think the analysis in Section 3 constitutes a proof of this claim.
It was shown that the trivial fixed point of an AMP with linear denoiser is attractive when SNR<1. I agree with this, but this does not imply that the asymptotic overlap of the spectral estimator is 0 when SNR<1.
The iterate of the linearized AMP converges (in constant number of steps) to the principal eigenvector (i.e., the spectral estimator) only when a spectral gap is present.
When SNR<1, there is no spectral gap and it is unclear how the iterate of linearized AMP is related to the principal eigenvector.
It may converge to some other vector, depending on the initialization.
In fact, rigorously speaking, I think it's fundamentally unlikely to prove subcritical behaviour by exploiting linearized AMP.
The analysis only proves the attraction of 0 when SNR<1, which is an *evidence* that the phase transition threshold is 1.
However, this implies neither "spectral fails when SNR<1" nor "spectral works when SNR>1".
3. If I understand correctly, the Delta matrix is assumed to be *known*. A very important aspect that was not discussed at all (correct me if I'm wrong) is what happens when Delta is unknown which appears (to me) to be a slightly more realistic assumption.
In that case the Bayes-AMP is no longer a practical algorithm (even with warm start) and the spectral algorithm is also not computable.
In fact, is it fair to say that the proposed spectral algorithm outperforms the naive one *because* it uses Delta information?
Can the price of lacking the knowledge of Delta be quantified?
I know this may go beyond the scope of the present paper.
But it seems to be an interesting nontrivial point that's worth mentioning/discussing.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Line 7, "info-theoretic optimal Bayes fixed-point equations" sounds like a confusing phrase. Consider expanding it as "the FP equations satisfied by the Bayes-optimal estimator".
1. Line 17, replace "etc..." with "etc."
2. Line 20 "block-constant Gaussian noise" is mildly confusing (sorry for being pedantic). The noise is never constant; consider saying "Gaussian noise with block-constant variance profile" or something like that.
3. Line 46, is the word "detect" properly used? Or do the authors mean "weak recovery"? My understanding is that they are not the same question.
3. I couldn't parse the grammar of 2. of Assumption 1.1.
5. L in the definition of PL functions seems to be a floating parameter. In standard theory, L is tied to the regularity of prior distributions and initializers. This needs to be quantified.
5. There are multiple mistakes in Remark 1.3. The correct statement should be "convergence under all L-PL functions is equivalent to convergence in Wasserstein-L". Also, bounds on the $k$-th ($k>2$) moment are a *stronger* assumption than bounds on the 2nd moment; consequently, convergence in W_k is *stronger* than W_2.
5. In Equation (6), has the notation E_{posterior} been defined? If not, then this notation is definitely too sloppy.
5. Line 137, there are redundant spaces in "N - vector".
5. Line 141, please explicitly write i in [N] when introducing the notation f_t^i.
5. Equation (33), I'm not familiar with the physics jargon "Nishimori identity" and didn't check Lelarge--Miolane, but isn't this just the law of total expectation (https://en.wikipedia.org/wiki/Law_of_total_expectation)?
5. In Section 3, an AMP with denoiser f_t(x) = x is considered. Why is identity denoiser a good thing to consider, besides its simplicity? I understand this comes from linearization. But for the sake of being self-contained, I suggest comment on this.
4. This paper handles heterogeneous noise. Is it true that with essentially no additional effort, heterogeneous prior can also be handled?
At least when the partition for the prior is the same as that for the noise?
I didn't check the details and may be wrong.
5. The number of blocks q is assumed to be a constant relative to N. What happens if it grows with N? Clearly if q=N nothing can be said. But this question makes sense for sufficiently slowly growing q. After the reduction to matrix-valued AMP, the SE result of Javanmard--Montanari is no longer applicable since they require the width of the matrix iterates to be a constant. Could the authors comment on what can potentially happen here?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for their insightful comments and valuable suggestions. We will incorporate all the suggestions into the final version, regardless of acceptance.
Weakness 2: From the information theoretical analysis for these models, no recovery of the hidden truth is possible (no matter the algorithm) when the SNR $< 1$. We have shown that when the SNR $< 1$ any fixed point of the AMP recursion must have zero expected overlap with the true signal. One way of seeing it is that for SNR $< 1$ state evolution implies that any iteration of the AMP recursion contracts the vector overlap (it is being multiplied by a matrix with the operator norm smaller than 1), thus the only possible overlap of a fixed point is 0. Since any eigenvector of the matrix $\tilde{\boldsymbol{Y}}$ obtained by linearizing AMP is a fixed point of AMP, it must also have zero expected overlap with the signal. This subtlety should be further explained in Section 3.
We believe indeed that there are subtle points for a fully rigorous AMP proof of the spectral method performance. We have only shown that the linear AMP is uninformative in this regime, but the performance of the spectral method still remains open. We are confident, however, that we have recovered the right matrix to analyze for such a spectral approach because it is based on a mathematically solid framework (see e.g. Maillard, Krzakala, Lu and Zdeborova [2021], or Venkataramanan, Kogler, and Mondelli [2022]) and backed by numerical simulations. A rigorous argument would require fine control of the convergence of the linear AMP, see e.g. High Dimensional Robust M-Estimation: Asymptotic Variance via Approximate Message Passing by Montanari and Donoho (2013) or the more recent, Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions by Loureiro et al.) Alternatively, a direct rigorous treatment of the spectral method directly from random matrix theory is a challenging but interesting open problem.
Weakness 3: Indeed the matrix $\tilde{\Delta}$ and the partition function $g$ are assumed to be known. Note, however, that this does not have to be a limitation.
In the degree-corrected block model Karrer and Newman (2011), for instance, the noise profile is perfectly known and given by the degrees of nodes in the graph. Similarly, for practical applications, one can empirically estimate the variance profile and assign group membership according to the empirical variances. We can first estimate both $\tilde{\Delta}$ and $g$ without knowing either of them in advance. Then we can work with the estimators rather than with the true quantities. An interesting direction for future work would be to see how AMP which uses estimators instead of the true $\tilde{\Delta}$ and $g$ performs and quantify how the lack of information impacts the performance of the algorithm, for instance using the Expectation Maximization framework in AMP (e.g. as in Expectation-Maximization Gaussian-Mixture Approximate Message Passing, Vila \& Schniter 2013).
Question 1: This indeed would be a better phrasing.
Question 2: Fixed.
Question 3: Gaussian noise with block-constant variance profile is a better term.
Question 4: Yes, "detect" in the context of line 46 refers to weak recovery.
Question 5: Line 99 should be replaced by: "For each $t \in [N], a \in [q]$, $f_{t}^{a}$ and $(f_{t}^{a})^{\prime}$ are Lipschitz."
Question 6: Theorem 1 should be stated to apply to $2$--Pseudo Lipschitz functions, but can be generalized to a larger class of functions if we assume more moments on the prior. We have corrected this.
Question 7: Fixed.
Question 8: The notation $\mathbb{E}_{\text{posterior}}$ stands for the expected value with respect to the posterior distribution.
Question 9: Fixed.
Question 10: We will add $i \in [N]$.
Question 11: Nishimori identity is indeed just a clever rewriting of the law of total expectation.
Question 12: We can gain some intuition motivating the choice $f_{t}(x) = x$ by considering simple priors, such as the Rademacher prior. In the case of the Rademacher prior, a simple computation shows that the Bayes-optimal choice of the denoising functions yields $f_{t}^{j}(\star) = \tanh(\star)$. In the first-order approximation we have $\tanh(x) \approx x$. Thus, at least for the Rademacher prior, the choice of identity denoising functions corresponds to the first-order approximation of the Bayes-optimal choice.
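This first-order picture can be checked numerically. For the homogeneous model with Rademacher prior, the state evolution is commonly written as $\gamma_{t+1} = \lambda^2\,\mathbb{E}_Z[\tanh(\gamma_t + \sqrt{\gamma_t}\, Z)]$, and the linearized recursion ($\tanh(x)\approx x$) is unstable exactly when $\lambda > 1$. The sketch below iterates this recursion by Monte Carlo (our illustration; the specific constants and the clamp at zero are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=200_000)  # Monte Carlo samples for the Gaussian expectation

def se_fixed_point(lam, T=200, gamma0=0.5):
    """Iterate the Rademacher-prior state evolution and return the limit point."""
    gamma = gamma0
    for _ in range(T):
        # Clamp at 0 so Monte Carlo noise cannot drive gamma slightly negative.
        gamma = max(lam**2 * np.mean(np.tanh(gamma + np.sqrt(gamma) * Z)), 0.0)
    return gamma

sub = se_fixed_point(0.8)  # lambda < 1: gamma -> ~0 (uninformative fixed point)
sup = se_fixed_point(2.0)  # lambda > 1: strictly positive, informative fixed point
print(round(sub, 3), round(sup, 2))
```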
Question 13: A heterogeneous prior can be handled as long as the coordinates are independent and the partition for the prior corresponds to the block structure of the variance profile. Although, for the Bayes-optimal choice of denoising functions $f_{t}^{j}$ in the AMP, different priors would certainly impact the calculation of the posterior mean in the corresponding definition (eq. 32 of the article), as long as the prior stays the same within a block, the denoising function $f_{t}^{j} = f_{t}^{g(j)}$, $t \in [N]$, $j \in [q]$, also remains the same within a given block.
Question 14: This is a very interesting comment, but it is unknown what happens for these models when the number of blocks varies with $N$, both algorithmically and from the information-theoretic point of view. One obstacle is that the fixed-point equations for the optimal estimators in the large-$N$ limit depend on $q$, so even classifying an optimal estimator is non-trivial in this setting. It remains a nice problem to explore in the future.
---
Rebuttal Comment 1.1:
Title: Thank you for the prompt and detailed reply
Comment: I thank the authors for their prompt and detailed reply (and revision of the manuscript which I didn't check).
"Weakness" 2: I agree with the authors' reply and I'm fine with heuristics (as long as it's made clear which is rigorous which is heuristic).
After all, these heuristics are likely to yield accurate predictions.
If space permitted, please consider making more clear in the manuscript the gap between attraction of trivial fixed point and the ineffectiveness of spectral estimator (as discussed in the reply).
Weakness 3: I agree that in some settings, Delta is naturally known.
Regarding the proposed strategy of first estimating \tilde{Delta} & partition then plugging them into AMP (or whatever subsequent procedure), is it obvious that in the proportional regime these can be estimated consistently? If not, then there is an asymptotic price to pay. Another point is that we observe Y which contains the signal. For the purpose of estimating the nuisance parameters, the signal is an interference. One model that one can play with is that, in addition to Y, we also observe Z which is pure noise (for the purpose of estimating tilde{Delta} and g).
A spiritually similar model is proposed and studied here: https://arxiv.org/abs/2211.00986.
In any case, this goes well beyond the scope of this paper.
My only point here is that the assumption that tilde{Delta} and g are known deserves a few sentences of remark.
Q12: This motivation makes sense. If space permitted, please consider adding a line or two discussing the choice of linear denoiser.
Q14: Thanks for the reply. I agree that this in general looks difficult.
Overall, my evaluation remains the same -- this paper is sufficiently interesting for NeurIPS. | Summary: The paper provides an analysis of an AMP algorithm for the spiked Wigner model with inhomogeneous noise. The paper builds on the matrix AMP framework to derive the state evolution equations for the considered AMP recursion for the studied model. The paper further shows that if the denoising functions are the Bayes one, then the fixed point equation of the state evolution of the AMP algorithm is the same as the one satisfied by the Bayes optimal estimator.
The paper also leverages the developed machinery to study the properties of a spectral algorithm which is motivated by considering the the identity denoising functions. It is conjectured that this spectral algorithm exhibits optimal phase transition.
Strengths: I find the paper to be generally well-written and it is not hard to follow. The problem that is considered is interesting and the presented results generalize previously known results for the spiked Wigner model with homogenous noise to the non-homogenous case.
Weaknesses: Minor comments:
- Page 1: Please state whether $\tilde{\Delta}$ and/or $g$ are assumed to be known. From Eq.(3) it seems that we do assume that $\Delta$ is known.
- Page 2, line 82: The notation for $f_t:\mathbb{R}^N\times \mathbb{N} \to\mathbb{R}^N$ is confusing/informal because from the displayed equation, it seems that $f_t$ takes input from $\mathbb{R}^N$. Is $t$ supposed to be the input integer in $\mathbb{N}$?
- Page 2, line 83: If $f_t^a$ are general Lipschitz functions, it is not clear to me why $f_t$ is linear.
- Page 3, line 112: It seems to me that if we replace the second moment assumption by finite $k$-th moment, we are strengthening the assumption, not weakening it.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In the abstract it is claimed that the spectral method is shown to match the information-theoretic transition. If I understand the arguments of the paper correctly, it seems that this is only a conjecture that is based on a heuristic, and no rigorous proof was given. Did I miss anything?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No concerns regarding potential societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We express our gratitude to the reviewer for their insightful comments and valuable suggestions. We will incorporate all the suggestions into the final version, regardless of acceptance.
Weakness, Page 1: Indeed the matrix $\tilde{\Delta}$ and the partition function $g$ are assumed to be known. Note however, that this does not have to be a limitation.
In the degree-corrected block model Karrer and Newman (2011), for instance, the noise profile is perfectly known and given by the degrees of nodes in the graph. Additionally, for practical applications, one can empirically estimate the variance profile and assign group membership according to the empirical variances. We can first estimate both $\tilde{\Delta}$ and $g$ without knowing either of them in advance. Then we can work with the estimators rather than with the true quantities. We did not explore this any further in this work. An interesting direction for future work would be to see how AMP which uses estimators instead of the true $\tilde{\Delta}$ and $g$ performs and quantify how the lack of information impacts the performance of the algorithm, for instance using the Expectation Maximization framework in AMP (e.g. as in Expectation-Maximization Gaussian-Mixture Approximate Message Passing, Vila \& Schniter 2013)
Weakness, Line 82: It is true that the definition is a bit confusing. We meant $t$ to be an integer, and for each $t \in \mathbb{N}$, $f_{t}$ is a function $\mathbb{R}^{N} \rightarrow \mathbb{R}^{N}$.
Weakness, Line 83: The word linear is a mistake here and should be left out.
Weakness, Line 112: You are right, the right word is strengthen.
Question: This is indeed correct and we will clarify this. The method we use to show the weak recovery at the sharp transition used a heuristic argument from State Evolution/AMP theory. While the argument is mathematically solid (see e.g. Maillard, Krzakala, Lu and Zdeborova [2021], or Venkataramanan, Kogler, and Mondelli [2022]) and backed by numerical simulations, it is not fully rigorous. A direct rigorous treatment of the spectral method directly from random matrix theory is a challenging but interesting open problem in random matrix theory.
---
Rebuttal Comment 1.1:
Title: Reply to authors' rebuttal
Comment: I would like to thank the authors for their reply. My assessment of the paper remains positive. | Summary: This paper considers the spiked Wigner problem with inhomogeneous noise, i.e. the inverse problem of estimating a rank-matrix through an inhomogeneous noise channel. This problem naturally arises in many applications and a universality result makes the problem considered quite general with regards to the noise distribution.
The authors have made several contributions in this paper:
1. They have derived the AMP recursions to solve the spiked Wigner problem.
2. The most interesting property of AMP-like methods is that their behavior can be characterized exactly through a set of low-dimensional state evolution equations. This paper obtains the state evolution for the AMP recursion that solves the problem considered.
3. The authors analyze the AMP algorithm with identity denoisers and show that it corresponds to a spectral method for a specific matrix. More interestingly, the authors conjecture that this linear version of AMP detects a spike in the same region as the general AMP.
Strengths: - The paper considers an interesting problem with many applications.
- The authors derive an AMP method to solve this problem.
- The authors obtain the state evolution of this AMP recursion, thus fully characterizing the macroscopic behavior of the AMP recursion at **each** iteration of the algorithm and **not** just the fixed points. This result gives us a **theoretical** way of obtaining the estimation error of the AMP method using very general metrics.
- The authors show that when the Bayes optimal denoisers are used (i.e. the mean of the posterior) the fixed point equations exactly match the Bayes optimal fixed point equation of another recent work.
- Finally, the authors analyze the linear version of the AMP algorithm and show it is equivalent to a spectral method for a specific matrix.
To summarize, the authors look at the spiked Wigner problem with inhomogeneous noise, derive an AMP method to solve it, and fully characterize the theoretical behavior of the AMP method in a certain high-dimensional asymptotics. This is in contrast to many other methods that are used in practice but have no theoretical guarantees, or only guarantees in the form of high-probability upper bounds on the error in certain metrics. AMP methods allow us to obtain the **exact** error in many **different error metrics** in a certain **high-dimensional limit**.
Weaknesses: The weaknesses that come to my mind are mostly the usual weaknesses of the AMP algorithms:
- The AMP algorithms are often described as not very useful for solving problems in practice due to their instability, often requiring a lot of tweaks such as damping to make them converge. I do not see many comments in this work regarding the stability of the AMP method described. This is in part due to the generality of the algorithm (described using general denoisers), and in fact for the linear case the authors show a condition for convergence; however, no convergence guarantees or potential issues in the general case are discussed. I should admit, however, that for example the Bayes optimality of the AMP method with Bayes optimal denoisers makes this method interesting in practice for this problem.
- That being said, AMP having theoretical guarantees would still be very valuable as a theoretical tool. However, obtaining the errors through the state evolution equations is often very nontrivial due to the need to calculate expectations in a recursive formula; this often requires MCMC methods and is sometimes computationally not much better than running AMP for several instances of the problem and estimating the final error.
- As mentioned above, the AMP results only hold in a certain asymptotic regime. However, in practice a very good match is observed even for moderately sized problems, as the authors also mention.
## Minor comments
I believe the work is hard to follow even for someone who is familiar with the problem considered but not with the AMP literature. It assumes the readers are knowledgeable in this area and very familiar with previous works, which makes it hard to follow.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - It would be nice if more explanations were added regarding the convergence of the AMP algorithm, any issues it might have in practice, and the usefulness of the state evolution equations, as they are recursive equations that do not seem trivial to simulate. For example, the convergence of AMP, or the uniqueness of its fixed points (if the fixed point is in fact unique), should either be shown or it should be clearly stated that these are open questions that need to be addressed in order for this AMP method to be fully characterized.
- In many AMP papers, the authors make a correspondence between the AMP algorithms and estimators that are more familiar to the community. For example, using certain (generalized) proximal operators as the denoisers, the AMP can then be shown to have fixed points that are also critical points of a certain loss function, i.e. AMP would be doing a form of M-estimation. Similarly, one could make a correspondence between other Bayesian estimators, such as the MMSE estimator, and denoisers that use the posterior distribution, such as the Bayes optimal one also mentioned in the paper. Can such results be shown for the AMP algorithm considered here? For example, showing that AMP with a certain denoiser minimizes a certain loss function, etc.? Since people are much more familiar with these optimization-based estimators, it would make this paper a lot easier to understand and follow. Similarly, for the Bayesian estimators which try to estimate the signal based on the posterior, such as the MMSE estimator, it would be good to make the correspondence between AMP and such estimators more clear.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - The practical limitations of the AMP algorithms and how easy/hard it is to use the state evolution in practice are not adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for their insightful comments and valuable suggestions. We will incorporate all the suggestions into the final version, regardless of acceptance.
• Weaknesses: We acknowledge that we did not delve into potential stability issues or convergence problems that our AMP algorithm might encounter. Such challenges are indeed common to many similar algorithms, and addressing them lies beyond the scope of our current work. Nevertheless, in our specific case the AMP is Bayes-optimal, and Bayes-optimal AMP typically exhibits strong convergence properties.
Additionally, while we recognize these concerns, they are precisely the reason we introduced the (AMP-inspired) spectral method in Section 3. This method can be formulated and solved using an off-the-shelf Python solver in just a few lines. We wish to highlight that this spectral algorithm achieves performance close to that of AMP without any associated algorithmic issues.
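As a concrete illustration of the kind of spectral method that a few lines of off-the-shelf Python can implement (this sketch uses a plain *homogeneous* spiked Wigner sample and a generic leading-eigenvector estimator, not the inhomogeneously-weighted matrix from the paper; all names here are our own):

```python
import numpy as np

def spectral_estimate(Y):
    """Return the leading eigenvector of the symmetric matrix Y."""
    eigvals, eigvecs = np.linalg.eigh(Y)  # eigenvalues in ascending order
    return eigvecs[:, -1]

rng = np.random.default_rng(0)
n, snr = 300, 3.0
x = rng.choice([-1.0, 1.0], size=n)            # Rademacher spike, norm sqrt(n)
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2.0)                   # symmetric Wigner noise
Y = (snr / n) * np.outer(x, x) + W             # spiked observation

x_hat = spectral_estimate(Y)
overlap = abs(x_hat @ x) / np.sqrt(n)          # correlation with the truth
```

Above the BBP-type transition (here snr > 1), the overlap stays bounded away from zero, which is the "detects a spike" behavior the linear AMP / spectral conjecture refers to.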
In response to the second point, we concur and plan to comment on it in an updated version.
• Questions: To begin with, our AMP does indeed provide the Bayes-optimal MMSE performance by computing the posterior mean. From the Bayesian statistics perspective, this is a classical estimator, and we will elucidate this in the revised version.
Furthermore, our AMP and its state evolution can be seamlessly adapted to examine M-estimators for this challenge, in the same vein as the standard AMP. Here, the denoiser will simply transition to the corresponding convex proximal operator. This is well-documented in sources such as "Generalized Approximate Message Passing for Estimation with Random Linear Mixing" by Rangan (2012) and "High Dimensional Robust M-Estimation: Asymptotic Variance via Approximate Message Passing" by Donoho and Montanari (2013). For M-estimation, our strategy in Section 3 pertaining to the spectral method can be perceived in a similar light. We modify the AMP to develop a spectral method, which could also be interpreted as minimizing a tailored M-estimator. We will provide clarity on this aspect in the forthcoming version.
Hierarchical Multi-Agent Skill Discovery | Accept (poster) | Summary: This paper introduces a framework that concurrently learns the individual skills for each agent and the team skill for the entire team, amalgamating these skills to perform multi-agent tasks. The discovery of skills is grounded in a probabilistic graphical model and employs variational inference tools for scalable optimization, extending unsupervised skill discovery in single-agent RL. The proposed algorithm exhibits superior performance on sparse reward multi-agent benchmarks when compared to robust MARL baselines.
Strengths: (a) The paper is effectively articulated, with clear exposition of the intuitions underlying each aspect of the algorithm design.
(b) The empirical results provide substantial support for the paper's technical contributions.
(c) The extension of unsupervised skill discovery from single-agent to multi-agent RL is a considerable achievement, offering a promising direction for future research.
Weaknesses: (a) The algorithm might face limitations in practical applications due to: (1) the high-level policy necessitating input that comprises the global state and all agents' observations, and (2) the multi-agent options designed for the entire team, which lack flexibility as coordination among agents often manifests within sub-teams.
(b) The algorithm framework is complex, encompassing multiple components and hyperparameters. The fine-tuning efforts, particularly for coordinating the training of neural networks across different scenarios, could be resource-intensive.
(c) The paper omits important details in certain areas and requires more clarity. My queries and suggestions follow.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (a) The statement in Line 40, "One is to let all agents learn a shared team skill [9, 10], which promotes team cooperation behaviors but suffers from high complexity," lacks clarity. Could you expand on why the complexity can be high? Since your work also aims to discover multi-agent skills, how have you addressed this complexity issue compared to [9, 10]?
(b) There are additional works on multi-agent option (also known as skill) discovery, such as [1]. Please consider offering a comprehensive review of this research field to highlight your novel contributions.
(c) Based on the definition in Line 144, it seems that the agents in the environment must be homogeneous as they share the same individual skill space. Is that correct?
(d) In Line 197, it's unclear how the protocol design can prevent skill duplication.
(e) It's unclear how you can maximize the lower bound in Eq. 3 with the objectives in Eq. 4-8, given that several coefficients are introduced and the optimization of functions are separated.
(f) For fair comparisons, MAPPO, MAT, and MASER should also include the global state as part of the input. Is that the case in your study?
(h) The number of team skills and individual skills are crucial hyperparameters and should be explicitly stated in the main paper.
[1] Chen, Jiayu, Jingdi Chen, Tian Lan, and Vaneet Aggarwal. "Scalable multi-agent covering option discovery based on Kronecker graphs." Advances in Neural Information Processing Systems 35 (2022): 30406-30418.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation part is not included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments. We hope we can address your concerns below.
**Q1**: Weaknesses (a) in the Official Review.
**A1**: Thank you for pointing out our potential limitations. For (1), as mentioned in lines 252-258, HMASD performs only one timestep of centralized execution in every $k$ timesteps. The small amount of centralized information introduced in this way can coordinate agents better from a global view compared to fully decentralized methods, and many applications can tolerate such a small amount of centralized execution. For (2), our team skills are for the entire team and are expected to explore different global state spaces. It would be more difficult to learn team skills for a sub-team: one would need to decide how to divide the team into sub-teams and how to learn team skills for sub-teams with a variable number of agents. We think this could be an important direction in our future research.
**Q2**: Weaknesses (b) in the Official Review.
**A2**: Yes; in fact, an early version of HMASD didn't work at all. We iterated through 28 versions to achieve the performance in the paper, adjusting the problem formulation of multi-agent skill discovery, the way of training skills, the model structure, the hyperparameters, etc. Our future work aims to make HMASD more practical at solving sparse-reward multi-agent tasks.
**Q3**: The statement in Line 40 lacks clarity. Could you expand on why the complexity can be high? Since your work also aims to discover multi-agent skills, how have you addressed this complexity issue compared to [9, 10]?
**A3**: The joint state and action spaces of multi-agent tasks increase exponentially with the number of agents, so directly letting all agents learn joint behaviors to form team skills has exponential complexity. Our method reduces this complexity by decomposing the team skill into different individual skills (which are easier to learn) and guiding the joint behavior of all agents to form the team skill.
**Q4**: There are additional works on multi-agent option (also known as skill) discovery, such as [1]. Please consider offering a comprehensive review of this research field to highlight your novel contributions.
**A4**: Thanks for your suggestion. We will conduct a wider review of this research field in our next revision.
**Q5**: it seems that the agents in the environment must be homogeneous as they share the same individual skill space. Is that correct?
**A5**: No, our method can also deal with heterogeneous agents. Although all agents share a same set of individual skills, the set of individual skills can be seen as the union of all agents' individual skills. An agent is not required to use all individual skills in the individual skill space.
**Q6**: In Line 197, it's unclear how the protocol design can prevent skill duplication.
**A6**: Some multi-agent tasks require assigning different skills to agents with similar observations. In multi-agent skill learning methods like [1] and [2], each agent selects skills based only on its local observation, leading to potential skill duplication between agents with similar observations. In our method, agents select skills sequentially. When an agent selects its skill, it knows all previous agents' selected skills, which can prevent skill duplication between agents with similar observations.
**Q7**: It's unclear how you can maximize the lower bound in Eq. 3 with the objectives in Eq. 4-8.
**A7**: The lower bound in Eq. 3 has four terms, i.e., the team reward, the diversity term, the skill entropy term, and the action entropy term. We maximize these four terms with different components of HMASD. We maximize the team reward by using it as the extrinsic reward for both the high-level policy and the low-level policy. We maximize the diversity term by using it as the intrinsic reward for the low-level policy. The skill entropy term and the action entropy term are maximized by optimizing the entropy of the high-level policy and the low-level policy, respectively. We also introduce several coefficients to balance the optimization weights of the four terms.
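To make the reward split concrete, here is a hypothetical sketch (not the authors' code) of a low-level reward mixing the extrinsic team reward with a discriminator-based diversity bonus. Modeling the diversity term as the log-probability the skill discriminator assigns to the active skill is a common choice in skill-discovery methods; the coefficient names follow the rebuttal, but the exact form in HMASD may differ.

```python
import math

def low_level_reward(team_reward, disc_prob, lambda_l, lambda_d):
    """disc_prob: discriminator probability of the currently assigned
    skill given the agent's observation, in (0, 1]."""
    # 0 when the skill is perfectly identifiable, negative otherwise
    diversity_bonus = math.log(disc_prob)
    return lambda_l * team_reward + lambda_d * diversity_bonus
```

The entropy terms are not part of this reward; as the answer above notes, they are handled by the entropy regularization of the two policies themselves.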
**Q8**: For fair comparisons, MAPPO, MAT, and MASER should also include the global state as part of the input. Is that the case in your study?
**A8**: We used the global state for MAPPO and MASER. Due to the problem formulation of MAT, we could not find a place to use the global state in MAT. Besides, even if an environment doesn't provide a global state, we can concatenate all agents' observations as the global state, which is the way we used in Overcooked, and it can also achieve good performance.
**Q9**: The number of team skills and individual skills are crucial hyperparameters and should be explicitly stated in the main paper.
**A9**: Due to page limit, we list the number of skills on all scenarios in Table 3 of Appendix F. We will consider putting it in the main paper in our next revision.
**Q10**: The limitation part is not included.
**A10**: HMASD has two main limitations. Firstly, as mentioned in lines 320-321, when the state-observation space is large, HMASD can discover diverse skills but maybe only some of them (about $24$% on SMAC) are useful for the team reward; a large percentage of skills are assigned to exploring zero-reward states. Secondly, as mentioned in line 52 of Appendix E, the number of team skills and individual skills should be carefully adjusted. When applying HMASD to a new task, several hyperparameters need to be adjusted, as shown in Table 3 of Appendix F. Our future work aims to address these limitations and make HMASD better at solving sparse-reward multi-agent tasks.
Finally, thank you again for your recognition and insightful review to our work. We will incorporate your suggestions into our next revision.
##### Reference
[1] J. Yang, I, et al. Hierarchical cooperative multi-agent reinforcement learning with skill discovery.
[2] Y. Liu, et al. Heterogeneous skill learning for multi-agent tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal. This has resolved most of my concerns. I look forward to seeing improvements related to Weakness (a) in the future version of this paper. I will maintain my current score. | Summary: The paper proposes a two-level hierarchical model for cooperative multi-agent RL. The key idea is to use variational-inference-based skill discovery over joint and individual policies. Intuitively, the objective can be described as follows: i) find individual options that are diverse (in terms of state visitations), ii) find joint options that are diverse (in terms of joint state visitations), and iii) maximise the reward.
The experimental results on several cooperative domains are presented and the method performs better than well established baselines.
Strengths: - The paper presents a technically solid, novel algorithm.
- The experiments are convincing and demonstrate that the method indeed discovers helpful joint and individual skills and combines them to into a reward-maximising policy
Weaknesses: - My main concern is that most of the improvement comes from the implicit exploration bonus that arises from the skill discovery objective, rather than from the decomposition of the main task into sub-tasks. It would be more convincing to have some sort of exploration bonus baseline, for example, adding a reward for individuals visiting new states and for the population visiting new joint states. One could derive these via the "Exploration by random network distillation" (https://arxiv.org/abs/1810.12894) method: one RND trained on individual states and another on joint states, summing up the rewards.
- There is no discussion on limitations.
- Minor issue. The notation is a bit overloaded and makes things slightly confusing:
- line 143-145. $Z \in \mathcal{Z}$ and $z \in \mathcal{X}$ is confusing
- line 157-159. p and q seem to be referring to two distributions each
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would like the authors to address the comment above on the (joint) exploration baseline.
I would also like the authors to explicitly discuss the limitations of the method.
I am willing to increase my score based on the answers.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no discussion on limitations, which I would encourage the authors to write.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments. We hope we can address your concerns below.
**Q1**: My main concern is that most of the improvement comes from implicit exploration bonus that arises from skill discovery objective, rather than from decomposition of the main task into subs tasks. It would be more convincing to have some sort of exploration bonus baseline.
**A1**: Thanks for your suggestion. Here, we compare HMASD with an exploration bonus MARL baseline, EITI/EDTI[1], on Overcooked. EITI/EDTI proposes exploration strategies where agents start with decentralized exploration driven by their individual curiosity (i.e., individual exploration), and are also encouraged to coordinate their exploration (i.e., joint exploration). The final reward for each agent is the sum of team reward, individual exploration bonus and joint exploration bonus. EITI and EDTI provide 2 ways to calculate the joint exploration bonus. The comparison of HMASD with EITI/EDTI on Overcooked is shown in Table 1.
Table 1: The final performance of episode reward on Overcooked.
| Scenario | EITI | EDTI | HMASD |
| :------------------- | :--------------- | :--------------- | :-------------- |
| cramped_room | 0.122 $\pm$ 0.013 | 0.119 $\pm$ 0.015 | 236.0 $\pm$ 8.0 |
| asymmetric_advantages | 0.008 $\pm$ 0.002 | 0.004 $\pm$ 0.002 | 295.1 $\pm$ 84.5 |
| coordination_ring | 0.0 $\pm$ 0.0 | 0.0 $\pm$ 0.0 | 80.0 $\pm$ 97.9 |
We can see that HMASD outperforms EITI and EDTI on all Overcooked scenarios. Although EITI/EDTI encourages both individual exploration and joint exploration, it doesn't formulate the relationship between individual exploration and joint exploration. These two explorations may have conflicts, causing neither exploration to work well. In our work, we propose to discover both team and individual skills for solving sparse-reward multi-agent tasks. Importantly, we build a probabilistic graphical model to formulate the relationship between team skill and individual skill, and then derive a lower bound as our objective. The results show that our method could discover significant team and individual skills, and effectively combine them to accomplish the sparse-reward multi-agent task.
**Q2**: There is no discussion on limitations.
**A2**: HMASD has two main limitations. Firstly, as mentioned in lines 320-321, when the state-observation space is large, HMASD can discover diverse skills but maybe only some of them (about $24$% on SMAC) are useful for the team reward; a large percentage of skills are assigned to exploring zero-reward states. Secondly, as mentioned in line 52 of Appendix E, the number of team skills and individual skills should be carefully adjusted. When applying HMASD to a new task, several hyperparameters need to be adjusted, as shown in Table 3 of Appendix F. Our future work aims to address these limitations and make HMASD better at solving sparse-reward multi-agent tasks.
**Q3**: Minor issue. The notation is a bit overloaded and makes things slightly confusing: line 143-145. $Z \in \mathcal{Z}$ and $z^i \in \mathcal{X}$ is confusing; line 157-159. p and q seem to be referring to two distributions each
**A3**: (1) $\mathcal{Z}$ is the team skill space and $\mathcal{X}$ is the individual skill space. The team skill space is for the whole team, and the individual skill space is for each individual agent. All agents share a same individual skill space $\mathcal{X}$. (2) In this paper, we adopt the structured variational inference to derive the lower bound. In structured variational inference, we aim to approximate some distribution $p(y)$ with another, potentially simpler distribution $q(y)$. Typically, $q(y)$ is taken to be some tractable factorized distribution. So $p$ is the true distribution and $q$ is the approximate distribution for $p$.
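For readers less familiar with structured variational inference, the generic bound underlying this construction (a standard derivation via Jensen's inequality, not specific to HMASD) is:

```latex
\log p(\mathcal{O})
  = \log \int p(\mathcal{O}, y)\, \frac{q(y)}{q(y)}\, dy
  = \log \mathbb{E}_{q(y)}\!\left[\frac{p(\mathcal{O}, y)}{q(y)}\right]
  \;\ge\; \mathbb{E}_{q(y)}\!\left[\log \frac{p(\mathcal{O}, y)}{q(y)}\right],
```

with equality when $q(y) = p(y \mid \mathcal{O})$; choosing a tractable factorized $q$ is what makes the resulting lower bound optimizable.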
Finally, thank you again for your thoughtful comments. We will incorporate your suggestions into our next revision. If some of your concerns are addressed, you could consider raising the rating. This is very important for us and we will appreciate it very much.
##### Reference
[1] Wang, T, et al. Influence-Based Multi-Agent Exploration. ICLR 2020. | Summary: This paper proposed HMASD, a two-level hierarchical algorithm for discovering both team and individual skills in MARL. The high-level policy based on the transformer structure generates team skills and individual skills in an autoregressive manner, and the low-level policies output primitive actions according to individual skills and local observations. The authors formulate multi-agent skill discovery as an inference problem by augmenting the basic probabilistic graphical model. Experimental results show that HMASD can outperform other baselines in sparse reward multi-agent tasks.
Strengths: 1. The paper is well-organized and well-motivated. The authors explain their formalism extremely well throughout, including in their methods section.
2. The authors design a toy game *Alice_and_Bob* to demonstrate how their method works, which improves the soundness of their method.
3. Some MARL work related to skill discovery or exploration is fully mentioned in the appendix.
4. The authors conduct solid experiments in some popular benchmarks, and carry out sufficient ablation experiments.
Weaknesses: 1. Some important baselines are missing in the experiment section, such as HSD and CMAE.
2. Some curves are stopped while learning does not seem to have converged in Figure 6 & 7.
3. Due to the introduction of more hyperparameters (8 new hyperparameters can be seen from Table 3), HMASD needs more hyperparameter tuning.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Shouldn't the distribution over $\mathcal{O}$ be proportional to $\exp(r(s,\boldsymbol{a}))$ in L121?
2. Would the initialization method of the arbitrary symbol $Z_0$ affect the performance of HMASD?
3. Is the build order of the individual skills preset?
4. In the *Alice_and_Bob* game, does $Z=1$ always correspond to the team skill collecting the red diamond?
5. Can HMASD be applied to value-based multi-agent reinforcement learning algorithms such as QMIX?
6. How does HMASD perform compared to CMAE and EITI/EDTI [1]?
**Reference**
[1] Wang, Tonghan et al. Influence-Based Multi-Agent Exploration. 2019.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors state that the limit of their method is the number of team skills and individual skills should be carefully adjusted. That is to say, due to the introduction of more hyperparameters, HMASD needs more hyperparameter tuning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments. We hope we can address your concerns below.
**Q1**: Some important baselines are missing in the experiment section, such as HSD and CMAE.
**A1**: HSD is an older method proposed in 2019. It performs poorly even on the dense-reward SMAC, as shown in [1]. In this paper, we select the most recent related works as our baselines, including MAT (NeurIPS 2022), MAPPO (NeurIPS 2022), and MASER (ICML 2022). Besides, HSD is a hierarchical MARL method that only learns individual skills, which is similar in idea to HMASD\_NoTeam in our ablations. We have compared HMASD with HMASD\_NoTeam to verify the importance of learning team skills, so we don't use HSD as our baseline. CMAE is a solid method but has complex implementation details. We don't choose CMAE as a baseline because CMAE doesn't provide code for running on SMAC. Someone raised an issue about the SMAC running code on its GitHub repository, but the authors did not reply.
**Q2**: Some curves are stopped while learning does not seem to have converged in Figure 6 & 7.
**A2**: Yes, we stop the training when HMASD converges. Overall, HMASD can achieve faster convergence and higher average performance than baselines.
**Q3**: Due to the introduction of more hyperparameters (8 new hyperparameters can be seen from Table 3), HMASD needs more hyperparameter tuning.
**A3**: Yes, this is one limitation of HMASD. We mainly adjust hyperparameters through grid search. We don't directly perform grid search on all 8 hyperparameters; instead, we divide them into 3 groups, i.e., $(k, n_Z, n_z), (\lambda_h, \lambda_l), (\lambda_e, \lambda_D, \lambda_d)$. We first perform grid search on $(k, n_Z, n_z)$ and fix those values. The next grid search is on $(\lambda_e, \lambda_D, \lambda_d)$ and the last is on $(\lambda_h, \lambda_l)$. Although the final hyperparameters obtained in this way may not be the best, this greatly reduces tuning time.
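The staged strategy described above can be sketched generically as follows (our own naming throughout; `evaluate` is a stand-in for a full training run returning, e.g., average episode reward):

```python
from itertools import product

def staged_grid_search(groups, defaults, evaluate):
    """Tune one group of hyperparameters at a time, keeping the others
    fixed, instead of searching the full joint grid at once.

    groups: list of dicts mapping hyperparameter name -> candidate values.
    defaults: dict of starting values for every hyperparameter.
    """
    best = dict(defaults)
    for group in groups:
        names = list(group)
        best_score, best_values = float("-inf"), None
        for values in product(*(group[n] for n in names)):
            trial = {**best, **dict(zip(names, values))}
            score = evaluate(trial)
            if score > best_score:
                best_score, best_values = score, values
        best.update(dict(zip(names, best_values)))  # fix this group's winners
    return best

# Toy demo: a quadratic "reward" peaked at k=5, lambda_h=0.3.
groups = [{"k": [3, 5, 10]}, {"lambda_h": [0.1, 0.3, 1.0]}]
best = staged_grid_search(
    groups, {"k": 3, "lambda_h": 0.1},
    lambda h: -(h["k"] - 5) ** 2 - (h["lambda_h"] - 0.3) ** 2)
```

Searching three small groups costs the sum of the group sizes rather than their product, which is where the tuning-time saving comes from.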
**Q4**: Shouldn't the distribution over $\mathcal{O}$ be proportional to exp($r(s,\boldsymbol{a})$) in L121?
**A4**: We follow the Eq. (3) in [2] to define the distribution over $\mathcal{O}$. Even if the distribution over $\mathcal{O}$ is proportional to exp($r(s,\boldsymbol{a})$), it just adds a constant to the optimized lower bound in Eq. (3), which has no effect on our method.
**Q5**: Would the initialization method of the arbitrary symbol $Z_0$ affect the performance of HMASD?
**A5**: Following MAT, we set the first symbol of the decoder (i.e., $Z_0$) to a fixed vector. We would like to explore the effect of $Z_0$ in the future.
**Q6**: Is the build order of the individual skills preset?
**A6**: No, all individual skills are learned equivalently, not in a specific order. The learning of individual skills for each agent depends on the skill assignment of the high-level skill coordinator, and we train the skill coordinator, skill discoverer, and skill discriminator simultaneously.
**Q7**: In the *Alice_and_Bob* game, does $Z=1$ always correspond to the team skill collecting the red diamond?
**A7**: No, HMASD can discover diverse team skills in different runs on *Alice_and_Bob*. We choose one of the runs that have well-explainable behaviors to visualize the skills.
**Q8**: Can HMASD be applied to value-based multi-agent reinforcement learning algorithms such as QMIX?
**A8**: No, it can't. From the derived lower bound in Eq. (3), we can see that the skill entropy term and the action entropy term are related to the entropy of high-level policy and low-level policy. So HMASD can only be applied to policy-based MARL.
**Q9**: How does HMASD perform compared to CMAE and EITI/EDTI?
**A9**: The reason we did not compare with CMAE has been explained in A1. Here, we compare HMASD with EITI/EDTI on Overcooked. EITI/EDTI proposes exploration strategies where agents start with decentralized exploration driven by their individual curiosity (i.e., individual exploration), and are also encouraged to coordinate their exploration (i.e., joint exploration). The final reward for each agent is the sum of the team reward, the individual exploration bonus, and the joint exploration bonus. EITI and EDTI provide two ways to calculate the joint exploration bonus. The comparison of HMASD with EITI/EDTI on Overcooked is shown in Table 1.
Table 1: The final performance of episode reward on Overcooked.
|Scenario| EITI | EDTI | HMASD |
| :------------------- | :--------------- | :--------------- | :-------------- |
| cramped_room | 0.122 $\pm$ 0.013 | 0.119 $\pm$ 0.015 | 236.0 $\pm$ 8.0 |
| asymmetric_advantages | 0.008 $\pm$ 0.002 | 0.004 $\pm$ 0.002 | 295.1 $\pm$ 84.5 |
| coordination_ring | 0.0 $\pm$ 0.0 | 0.0 $\pm$ 0.0 | 80.0 $\pm$ 97.9 |
We can see that HMASD outperforms EITI and EDTI on all Overcooked scenarios. Although EITI/EDTI encourage both individual and joint exploration, they do not formulate the relationship between the two, and these two kinds of exploration may conflict, causing neither to work well. In our work, we propose to discover both team and individual skills for solving sparse-reward multi-agent tasks. Importantly, we build a probabilistic graphical model to formulate the relationship between team skills and individual skills, and then derive a lower bound as our objective. The results show that our method can discover meaningful team and individual skills and effectively combine them to accomplish sparse-reward multi-agent tasks.
Finally, thank you again for your thoughtful comments. We will incorporate your suggestions into our next revision. If some of your concerns are addressed, you could consider raising the rating; this is very important to us and we would appreciate it very much.
##### Reference
[1] Wang T, et al. Rode: Learning roles to decompose multi-agent tasks.
[2] Levine S. Reinforcement learning and control as probabilistic inference: Tutorial and review.
---
Rebuttal Comment 1.1:
Title: Thank you for covering my questions.
Comment: I appreciate the effort the authors have put into addressing my concerns. Their responses have effectively alleviated the concerns I had initially. As a result, I have chosen to revise my ratings for this paper. Thank you for the thorough and satisfactory replies. | Summary: This paper focuses on applying unsupervised skill learning to multi-agent reinforcement learning. For this purpose, the authors proposed a two-level hierarchical algorithm for discovering both team and individual skills in MARL, where individual skills refers to the abilities of individual agents and team skills refer to the ability of agents to work together as a whole. To this end, they embed the multi-agent skill discovery problem into a probabilistic graphical model and formulate it as an inference problem. Finally, they show that the proposed method achieves superior performance on sparse reward MARL benchmarks.
Strengths: 1. The problem this paper considers is rather important and it is a promising way to learn a set of skills and combine them properly to tackle complex tasks.
2. The literature review is sufficient in Appendix B.
3. The proposed method of decomposing the team skill into different individual skills for agents and ensuring that the joint behavior of all agents can form the team tactic is well-motivated with the football example.
4. The proposed method is novel as this work is the first attempt to model both team skills and individual skills with the probabilistic graphical model in MARL.
5. The empirical evaluation, especially Figure 4, is of high quality and quite interesting.
6. The results on SMAC with sparse rewards and Overcooked are significant.
7. The paper is generally well-written.
Weaknesses: The reviewer is concerned about the training of the proposed method. (1) Too many components that require function approximation may bring instability into the MARL training process. (2) In Figure 3 of Appendix E, the proposed method seems to be very sensitive to hyperparameters.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Have the authors encountered instability problems when training as there are so many components combined with the MARL algorithm?
2. In Table 3 of Appendix F, different tasks require different hyperparameters. How did the authors choose these hyperparameters? Is it possible to find a suitable set of hyperparameters that can be suitable for most tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: This paper does not discuss the limitations. The biggest limitation might be that this method is too sensitive to hyperparameters, making it difficult to apply it directly to new tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments. We hope we can address your concerns below.
**Q1**: Have the authors encountered instability problems when training as there are so many components combined with the MARL algorithm?
**A1**: Yes, in an early version of HMASD, we found that HMASD performed poorly even on dense-reward tasks. We iterated through 28 versions to achieve the performance reported in the paper, refining the problem formulation of multi-agent skill discovery, the way of training skills, the model structure, the hyperparameters, etc.
**Q2**: In Table 3 of Appendix F, different tasks require different hyperparameters. How did the authors choose these hyperparameters? Is it possible to find a suitable set of hyperparameters that can be suitable for most tasks?
**A2**: We mainly tune hyperparameters through grid search. There are 8 important hyperparameters in HMASD, as shown in Table 3 of Appendix F. Rather than performing grid search over all 8 hyperparameters jointly, we divide them into 3 groups, i.e., $(k, n_Z, n_z)$, $(\lambda_h, \lambda_l)$, and $(\lambda_e, \lambda_D, \lambda_d)$. We first perform grid search on $(k, n_Z, n_z)$ and fix them; the next grid search is on $(\lambda_e, \lambda_D, \lambda_d)$ and the last is on $(\lambda_h, \lambda_l)$. Although the hyperparameters obtained this way may not be the best, this strategy greatly reduces tuning time. Among the 8 hyperparameters, we find that the skill interval $k$, the number of team skills $n_Z$, and the number of individual skills $n_z$ are the three most important; ablations on these three can be found in Appendix E. The performance of HMASD varies greatly with different $k, n_Z, n_z$. These hyperparameters mainly depend on the specific task, especially the number of skills needed to solve it. It is possible to find a set of hyperparameters that generalizes to a set of similar tasks.
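The staged grid search described above can be sketched as follows. This is an illustrative sketch only: `staged_grid_search`, the candidate value ranges, and the toy separable objective are hypothetical stand-ins, not the authors' actual code or tuning ranges.

```python
from itertools import product

def staged_grid_search(evaluate, defaults, groups):
    """Search each group of hyperparameters exhaustively, keeping the
    best values found for earlier groups (and the defaults for later
    groups) fixed."""
    best = dict(defaults)
    for group in groups:
        names = list(group)
        best_score, best_combo = float("-inf"), None
        for combo in product(*(group[n] for n in names)):
            trial = {**best, **dict(zip(names, combo))}
            score = evaluate(trial)
            if score > best_score:
                best_score, best_combo = score, combo
        best.update(zip(names, best_combo))
    return best

# Hypothetical candidate values, for illustration only.
groups = [
    {"k": [5, 10], "n_Z": [2, 3], "n_z": [2, 3]},
    {"lambda_e": [0.01, 0.1], "lambda_D": [0.1, 0.5], "lambda_d": [0.1, 0.5]},
    {"lambda_h": [0.5, 1.0], "lambda_l": [0.5, 1.0]},
]
defaults = {name: values[0] for g in groups for name, values in g.items()}

# Toy separable objective standing in for a full training run.
target = {"k": 10, "n_Z": 3, "n_z": 2, "lambda_e": 0.1,
          "lambda_D": 0.5, "lambda_d": 0.1, "lambda_h": 1.0, "lambda_l": 0.5}
evaluate = lambda trial: -sum((trial[n] - target[n]) ** 2 for n in target)

best = staged_grid_search(evaluate, defaults, groups)
```

With two candidate values per hyperparameter, the staged search costs $8+8+4=20$ evaluations here instead of the $2^8=256$ needed for a full joint grid, which is why it reduces tuning time (at the risk of missing the jointly optimal setting when groups interact).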
**Q3**: This paper does not discuss the limitations. The biggest limitation might be that this method is too sensitive to hyperparameters, making it difficult to apply it directly to new tasks.
**A3**: HMASD has two main limitations. Firstly, as mentioned in lines 320-321, when the state-observation space is large, HMASD can discover diverse skills, but only some of them (about $24$% on SMAC) may be useful for the team reward; a large percentage of skills are assigned to exploring zero-reward states. Secondly, as mentioned in line 52 of Appendix E, the number of team skills and individual skills should be carefully adjusted. When applying HMASD to a new task, several hyperparameters require tuning, as shown in Table 3 of Appendix F. Our future work aims to address these limitations and make HMASD better at solving sparse-reward multi-agent tasks.
Finally, thank you again for your recognition and positive review of our work. We will incorporate your suggestions into our next revision. | Rebuttal 1:
Rebuttal: We have uploaded a one-page PDF containing a new figure that visualizes the learned individual skills on the SMAC scenario 3m.
Pdf: /pdf/29e24de865cbac59eaad54cbfd675aa65cdd1cbd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents Hierarchical Multi-Agent Skill Discovery (HMASD) that can discover both team and individual skills in MARL. The authors formulate multi-agent skill discovery as an inference problem in probabilistic graphical models. The model consists of a skill coordinator that reasons about team and individual skills, a skill discoverer that maps skills into actual execution, and a skill discriminator that encourages the learning of diverse and distinguishable skills. The proposed method is evaluated on sparse reward multiagent benchmark including SMAC and overcooked and is shown to achieve superior performance comparing to baselines. Ablation studies are also done to verify the effectiveness of each proposed components.
Strengths: - The proposed method is interesting.
- The paper is nicely structured.
Weaknesses: The evaluation is a bit limited given that only training curves are shown. More fine-grained experiments, such as visualization of the learned skills and the progression of skill learning, may provide more insight into the effectiveness of the proposed method. Overall the experiments are on the light side; see more detailed comments in the questions section.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - In figure 6, there is no orange line and does that mean MAPPO fails to learn anything?
- By comparing figure 6 and 7, the performance of MASER is so different. In figure 6, MASER is the only baseline that works; however, in figure 7, MASER is the baseline that works the worst. Could authors briefly explain why this is the case?
- Why conduct experiment in SMAC rather than SMACv2?
- I cannot find the 3m scenario in SMAC [40]. Does it mean 3 marine or MMM2?
- How do the authors deal with the heterogenous agents in SMAC 2s_vs_1sc and 2m_vs_1z?
- It would be interesting to visualize the learned skills in both SMAC and overcooked environments.
- In the paragraph at line 314, the authors discuss interesting observation of how skill learning is done. It will be more convincing if some quantitative results can be shown here to support the statement.
- In spite of the limited space, it's better to include the related work section in the main paper to make it more self-contained.
- The variances of HMASD in 2s_vs_1sc, 2m_vs_1z, and coordination_ring are extremely large and the lower bounds of the shaded area are at zero, making it hard to justify the effectiveness of the method. What is the percentage of the model to learn meaningful behavior among all trials?
- How many random seeds are used for plotting those training curves?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitation of this work is not explicitly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed comments. We hope we can address your concerns below.
**Q1**: In figure 6, there is no orange line and does that mean MAPPO fails to learn anything?
**A1**: Yes, the orange line is covered by the blue and green lines. Both MAT and MAPPO fail to learn anything on SMAC with 0-1 reward.
**Q2**: In figure 6, MASER is the only baseline that works; however, in figure 7, MASER is the baseline that works the worst. Could authors briefly explain why this is the case?
**A2**: MASER proposes to automatically generate subgoals for agents, and the original paper demonstrates its effectiveness only on SMAC tasks. In our work, we find that MASER works only on SMAC and fails on Overcooked. Other works [1,2] also find that MASER performs poorly on other multi-agent tasks. This may be because MASER's automatic subgoal generation mechanism only works in some specific tasks, e.g., SMAC.
**Q3**: Why conduct experiment in SMAC rather than SMACv2?
**A3**: SMACv2 was proposed because SMAC with dense reward is so easy that many algorithms (such as MAT and MAPPO) can achieve almost 100% win rates on all dense-reward SMAC scenarios. In our work, we aim to solve challenging multi-agent tasks with sparse rewards, and in our early experiments we found that SMAC with 0-1 reward is hard enough to differentiate the performance of different algorithms. So we chose SMAC rather than SMACv2.
**Q4**: I cannot find the 3m scenario in SMAC. Does it mean 3 marine or MMM2?
**A4**: The original SMAC paper introduces only 14 scenarios, while SMAC's codebase provides more scenarios, including 3m. The 3m scenario represents 3 allied marines versus 3 enemy marines.
**Q5**: How do the authors deal with the heterogeneous agents in SMAC 2s_vs_1sc and 2m_vs_1z?
**A5**: In both 2s_vs_1sc and 2m_vs_1z, the type of all agents controlled by MARL method is the same. Therefore, they are scenarios with homogeneous agents, not heterogeneous agents. Our method can also deal with heterogeneous agents.
**Q6**: It would be interesting to visualize the learned skills in both SMAC and overcooked environments.
**A6**: Thanks for your suggestion. Due to the time limit, we visualized the learned individual skills on the SMAC scenario 3m, as shown in Figure 1 of the one-page PDF in the global response. We can observe that only 1 of the 3 skills results in a non-zero team reward, while the remaining 2 skills explore state-observation spaces that do not contribute to the team reward. We will provide more skill visualizations in our next revision.
**Q7**: In the paragraph at line 314, the authors discuss interesting observation of how skill learning is done. It will be more convincing if some quantitative results can be shown here to support the statement.
**A7**: We count the average percentage of useful individual skills among all learned individual skills on SMAC. For each SMAC scenario, we conduct $5$ runs with different random seeds, learning $3, 5, 2$ individual skills per run on 3m, 2s_vs_1sc, and 2m_vs_1z, respectively. Therefore, we learn $3\times5+5\times5+2\times5=50$ individual skills across the three SMAC scenarios. After testing, only $12$ of these individual skills are useful for team performance; in other words, on average only $24$% of the learned individual skills are useful for completing the task after training.
**Q8**: In spite of the limited space, it's better to include the related work section in the main paper to make it more self-contained.
**A8**: Thanks for your suggestion. We will include the related work section in the main paper in our next revision.
**Q9**: The variances of HMASD in 2s_vs_1sc, 2m_vs_1z, and coordination_ring are extremely large and the lower bounds of the shaded area are at zero, making it hard to justify the effectiveness of the method. What is the percentage of the model to learn meaningful behavior among all trials?
**A9**: For every scenario, we conduct $5$ runs. For each run on a SMAC scenario, once the agents discover useful skills, the performance quickly increases to $1$; so the final performance is either $1$ or $0$, which leads to large variance across the $5$ runs. There are $7$ scenarios in our paper, and among the $5\times7=35$ runs, $26$ learned meaningful behavior. Therefore, the percentage of runs that learn meaningful behavior is $26/35=74.3$%. One of our future goals is to increase this percentage and reduce the variance of our method.
**Q10**: How many random seeds are used for plotting those training curves?
**A10**: As mentioned in line 268, for each training curve, we show the mean and variance of the performance across five different random seeds.
**Q11**: The limitation of this work is not explicitly discussed in the paper.
**A11**: HMASD has two main limitations. Firstly, as mentioned in lines 320-321, when the state-observation space is large, HMASD can discover diverse skills, but only some of them (about $24$% on SMAC) may be useful for the team reward; a large percentage of skills are assigned to exploring zero-reward states. Secondly, as mentioned in line 52 of Appendix E, the number of team skills and individual skills should be carefully adjusted. When applying HMASD to a new task, several hyperparameters require tuning, as shown in Table 3 of Appendix F. Our future work aims to address these limitations and make HMASD better at solving sparse-reward multi-agent tasks.
Finally, thank you again for your thoughtful comments. We will incorporate your suggestions into our next revision. If some of your concerns are addressed, you could consider raising the rating; this is very important to us and we would appreciate it very much.
##### Reference
[1] Yang X, et al. Learning Graph-Enhanced Commander-Executor for Multi-Agent Navigation.
[2] Li W, et al. Semantically Aligned Task Decomposition in Multi-Agent Reinforcement Learning.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the authors' effort on the rebuttal.
Overall, the rebuttal addressed most of my concerns. Still, I believe the paper will be much stronger with more fine-grained analyses like A6, A7, and A8, along with the results requested by the other reviewers. Given the time limit of the rebuttal, I am satisfied with the additional experiments, but more thorough and extensive results for all the new experimental analyses should be added in the revision. Hence, I will increase my rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply
Comment: We are happy that we could address most of your concerns. We promise to incorporate our rebuttal results and add more fine-grained analysis in the next revision. | null | null | null | null | null | null |
Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks | Accept (poster) | Summary: The authors propose a form of knowledge distillation for a retriever-reader architecture. It uses rationales to guide the neural reranker to retrieve more relevant passages for reasoning, instead of passing the query to the retriever and retrieving the most similar passages. The paper includes an interesting set of ablations as well as some quantitative analysis and limitations. The generative process is as follows: at training time, a rationale is generated using an LLM. Subsequently, the rationale is passed to the retriever, which retrieves the top-k passages most similar to it. Finally, a small LM is fine-tuned with the rationale, the retrieved passages, and the question.
At inference time, the question is passed to the retriever, and the top-k passages relevant to the question are retrieved (with BM25 plus a neural reranker, which helps correct the initial ranking that is based on the question rather than the rationale). Then, a rationale is generated conditioned on the reranked passages and the input query.
Strengths: Originality and significance: the authors' contribution is a nice application of both distillation and retrieval or knowledge augmentation for LMs.
Clarity: the paper is well written and it is clear and easy to read.
Weaknesses: From the experiments section, it is not clear what type of retriever is used by the baseline methods that include knowledge augmentation: is it a dense retriever or BM25? If the former, which specific encoder/decoder is used, and on which task is it fine-tuned?
Here, I am assuming that the retriever in the baselines is BM25:
Such comparisons to more than one neural retrieval-augmented language model are important to paint a full picture of the contribution. One possibility is to use NQ, TriviaQA, other QA evaluation datasets, or the KILT datasets to understand whether both the retrieved passages and the rationale augmentation are useful for smaller LMs on simpler QA tasks besides the reasoning QA used in the paper.
Alternatively, comparisons to other retrieval-augmented language models using the same benchmarks included in the paper can also help to quantitatively assess its performance, grounding the claims that the cited paper makes (i.e., that these models are not good for complex reasoning tasks). --> This experiment has been done during the rebuttal
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please mention the encoder that is used to initialise the reranker in the main text; I had to tease it out of the appendix.
Would it make sense to do a comparison to augmentation with rationales plus filtering, as in Zelikman et al. (2022), STaR: Bootstrapping Reasoning With Reasoning, or do we expect the rationales to always be valid and useful?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors include a comprehensive potential societal impact section in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive and helpful comments. We initially address all your concerns and questions below:
---
> W1. It is not clear what type of retriever the baseline methods that include knowledge augmentation.
Thank you for pointing it out and we will include more details about the retriever in Section 5.1. As you mentioned, we use BM25 as the retriever for the baseline methods, as described in Section B of the Supplementary File.
---
> W2. Evaluation of proposed method on simple QA datasets (NQ, TriviaQA, KILT)
Thank you for your suggestion. However, we would like to emphasize that the primary target of our work is on reasoning distillation; therefore, we hope you understand that improving on the simple QA datasets, which may not require the complex reasoning ability of LMs, is neither our focus nor the scope of this work.
---
> W3. Comparisons to other retrieval-augmented language models using the same benchmarks included in the paper can also help to quantitatively assess its performance grounding the claims that the cited paper mentions (i.e., that these models are not good for complex reasoning tasks)
Thank you for your suggestion.
We respectfully emphasize that our main argument is that **knowledge augmentation is important when conducting reasoning distillation** for fine-tuning small LMs, especially for knowledge-intensive reasoning tasks, to supplement the limited capacity of small LM for memorizing knowledge.
Therefore, a knowledge-augmented (retrieval-augmented) LM is not a direct competitor with our proposed method.
Rather, it is one of the possible backbone networks that we can use for reasoning distillation with our method KARD.
Nevertheless, following your suggestion, we compared against Atlas [1], which is a state-of-the-art open-source knowledge-augmented LM, on the MedQA-USMLE dataset to measure the capability of knowledge-augmented LM on the tasks requiring complex reasoning ability.
In the experiment on the MedQA dataset, the Atlas-base model with 220M parameters shows accuracy of **31.03**, which is comparable to the fine-tuned Flan-T5 with 250M parameters but **significantly worse** than our method KARD with accuracy of **38.15**. This result implies that **Atlas has limited reasoning ability**, especially in the domain requiring expert knowledge, which emphasizes the importance of reasoning distillation. Motivated by our findings, incorporating reasoning distillation on knowledge-augmented LM can be a meaningful research direction for future work.
[1] Izacard et al., Atlas: Few-shot Learning with Retrieval Augmented Language Models, 2022
---
> Q1. Please mention the encoder that is used to initialize the reranker in the main text.
Thank you for the suggestion. As described in lines 109-114 of the Supplementary File, the encoders for MedQA and StrategyQA are initialized with BioLinkBERT-base and LinkBERT-base, respectively. We will specify details about the reranker in Section 5.1 of the main paper.
---
> Q2. Would it make sense to do a comparison to augmentation with rationales + filtering as done in Zelikman et al. (2022)?
Thank you for the suggestion. We would like to emphasize that "filtering" is orthogonal to knowledge augmentation; we can apply any filtering method to KARD. Furthermore, we have already done so in our experiments by removing wrong rationales generated by large language models, following Ho et al. [2] (see lines 82-89 of the Appendix).
[1] Zelikman et al., Star: Bootstrapping reasoning with reasoning, NeurIPS 2022.
[2] Ho et al., Large language models are reasoning teachers, ACL 2023.
---
Rebuttal Comment 1.1:
Title: Rebuttal acknowledgement.
Comment: Thank you to the authors for answering my questions and for conducting further experiments that highlight their contribution's main strengths. Specifically, I found it very useful to see the ablation where they used the PubMed corpus instead of Wikipedia (where KARD performs worse), the comparison against Atlas, and the comparison of KARD with reasoning distillation + RAG. I encourage you to include these in the camera-ready version of the paper. For these reasons, I don't have any other concerns and I am increasing my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad that our rebuttal addressed all of your concerns, and we are happy to see the reviewer raised the score.
As you suggested, we will include discussions and experimental results against other retrieval-augmented methods in a future revision.
Once again, we thank the reviewer, as your insights are invaluable in guiding our revisions.
We are heartened by your positive reception of our work and will update it following your suggestions. | Summary: The paper focuses on distilling the chain-of-thought reasoning capability from large LMs to small LMs in knowledge-intensive tasks. Since small LMs do not encode sufficient knowledge required for reasoning, the paper proposes to augment small models with a knowledge retriever that obtains relevant documents for a given task. Experiments show that the proposed method leads to more successful knowledge distillation, especially when the LM size is smaller.
Strengths: 1. The proposed idea is well-motivated and sound.
2. Sufficient experiments and detailed analysis are provided to demonstrate the effectiveness of the method.
Weaknesses: One concern I have is the consequence of using multiple rationales to train the small LM, since this would misguide the model to learn that the answer prediction does not rely on the rationale. This may further lead to shortcut reasoning. I would suggest using simulation-based metrics to evaluate the faithfulness of the rationales to see if this is the case. Or you can randomly corrupt the generated rationales and see if the answer prediction is affected.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: How do you obtain the rationales from the large LM? The chain-of-thought prompting asks the LM to generate the rationale and then the answer. Do you simply provide the answer to the large LM and ask it to rationalize the answer?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are well discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive and helpful comments. We initially address all your concerns and questions below:
---
> W1. One concern: The consequence of using multiple rationales to train the small LM, since this would misguide the model to learn that answer prediction does not rely on the rationale. This may further lead to shortcut reasoning.
Thank you for your insightful comment. We have also noticed this problem to some extent; therefore, we first filter out erroneous rationales that lead to false predictions, following the procedure described in lines 82-89 of the Appendix. In other words, if a small Flan-T5 model makes a wrong prediction with a rationale generated by ChatGPT, that rationale is discarded. Furthermore, we analyzed rationale diversity in Table 2 of the main paper; the results show that increasing the diversity of rationales (i.e., using multiple rationales) leads to better performance of small language models.
---
> Q1. How do you obtain the rationales from the large LM?
We use chain-of-thought prompting to generate rationales from the large LM. We explain the detailed procedure in lines 82-89 of the Appendix and provide the example prompt in Tables 4 and 5 of the Appendix. | Summary: This paper proposes a retrieval-augmented knowledge distillation approach for QA tasks. This approach, KARD, extends reasoning distillation, which uses an LLM such as GPT-3.5 as a teacher model and distills a student model by learning from question and rationale pairs (generative loss). KARD has a retriever that obtains relevant documents based on the rationale. The retrieved documents are used for training a student model. In addition, a reranker is independently trained to select more relevant documents.
This approach is evaluated on two QA datasets: one from the biomedical domain, MedQA-USMLE, and StrategyQA, which covers more general and popular entities. English Wikipedia is used as the knowledge base throughout the experiments. The proposed approach, KARD (reasoning distillation + reranker), is compared with prompting approaches (e.g., few-shot, CoT) and other finetuning-based approaches (e.g., standard FT and knowledge-augmented FT). KARD outperforms all baselines by various margins depending on the datasets and model sizes. In addition, the authors provide analyses on several factors such as the number of rationales.
Strengths: - This work proposes KARD, which integrates retrieval and reranking modules into the distillation framework.
- The experimental results support the effectiveness of this approach particularly on the QA task from the medical domain.
- Overall, this paper is well-written and easy to follow. The detailed analysis on model configurations is provided.
Weaknesses: - Although this approach outperforms all the baselines, its gains become marginal at larger model sizes, specifically on MedQA-USMLE and StrategyQA. On the other hand, KARD demonstrates performance comparable to that of the oracle, ChatGPT, on StrategyQA, but lags significantly behind on MedQA-USMLE. The inconsistency in these results makes interpretation challenging.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Related to the first point about weaknesses, I believe including an analysis of the datasets (for instance, entity types and the diversity of entities) could be helpful. This is because the two datasets seem sufficiently different, and these differences might explain the results.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive and helpful comments. We address all your concerns and questions below:
---
> W1. Gains become marginal with large model sizes.
As stated in the main paper lines 108-110, memorization of training data is essential for achieving good performance in language tasks [1], and memorization capacity is proven to be proportional to model size [2]. Thus, larger models with better memorization may rely less on the retrieved knowledge for answering questions, and thus gains from retrieval become small as the model size increases. We empirically observe this pattern in our experiment and have clearly discussed it in lines 261-266.
Nevertheless, the largest XL model augmented with silver documents still outperforms the model without any knowledge augmentation on the MedQA-USMLE dataset. This implies the potential for larger gains with better retrieval models, which opens up a promising direction for future work.
[1] Brown et al., When is memorization of irrelevant training data necessary for high-accuracy learning?, STOC 2021.
[2] Kim et al., Provable memorization capacity of transformers, ICLR 2023.
---
> W2. KARD lags significantly behind on MedQA-USMLE compared to ChatGPT but not much on StrategyQA. The inconsistency in these results presents a challenge in interpretation.
We think that such a discrepancy comes from the different characteristics of MedQA and StrategyQA (see examples in Tables 11 and 12 in the Appendix). Specifically, the questions in MedQA are typically longer and contain more entities compared to the ones in StrategyQA, which requires a language model to leverage significantly more knowledge to answer the questions in MedQA. Thus, it is challenging for the smaller language model with limited knowledge capacity to outperform larger language models like ChatGPT on MedQA, even when it is coupled with knowledge retrieval. However, since the model augmented with silver knowledge achieves better performance than the one with knowledge retrieved by our reranker, we believe the gap between small and large language models can be further reduced with better retrievers.
---
> Q1. Regarding W2, including an analysis of the datasets could be helpful.
Thank you for your valuable suggestion. We include statistics of each dataset including the average number of entities and words in each question as follows:
| |MedQA-USMLE | StrategyQA|
|:---:|:---:|:---:|
|Average number of entities | 35.72 | 1.19 |
|Average number of words | 133.59 | 10.6 |
As we can observe, questions in MedQA-USMLE are generally longer and contain more entities than questions in StrategyQA. | Summary: This paper proposes the KARD model for small-model QA through knowledge distillation + KB retrieval. The authors show experimentally that the model can outperform other 3B-parameter models using only 250M parameters.
Strengths: 1. A model combining LLM knowledge distillation + KB retrieval is proposed.
2. KARD outperforms other fine-tuning models.
3. A neural reranker based on the similarity of rationales and passages is proposed.
4. The paper is easy to read.
Weaknesses: 1. A problem of the paper is the experimental design. In Table 1, the authors do not compare against other knowledge-augmented LMs. This leads to experimental comparisons that are inadequate and unfair.
2. The paper's finding that KB retrieval enhances model effectiveness is not that surprising. This leads to a possible lack of innovation throughout the paper.
3. For MedQA-USMLE, using Wikipedia as the KB may not be as effective as using a specialized medical KB.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What if KARD switched to using a medical KB for MedQA-USMLE?
2. What is the time cost to train a reranker, given that you need to compute the similarity between each pair of Wiki articles and training samples?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed potential technical limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your constructive comments. We faithfully addressed all your concerns and questions below:
---
> W1. Comparison against other knowledge-augmented LMs.
Thank you for your suggestion. Please note that our main argument is that **knowledge augmentation is important when conducting reasoning distillation** for finetuning small LMs, especially for knowledge-intensive reasoning tasks, to supplement the limited capacity of small LMs for memorizing knowledge. Therefore, **a knowledge-augmented LM is not a direct competitor with our proposed method**. Rather, it is one of the possible base models that can benefit from reasoning distillation with our KARD.
Nevertheless, following your suggestion, we compared against Atlas [1], which is a state-of-the-art open-source knowledge-augmented LM, on the MedQA-USMLE dataset to measure the capability of knowledge-augmented LM on the tasks that require complex reasoning ability. In the experiment on the MedQA dataset, the Atlas-base model with 220M parameters shows accuracy of **31.03**, which is **significantly worse** than our method **KARD** with accuracy of **38.15**. It implies that **Atlas has limited reasoning ability**, especially in the domain requiring expert knowledge, which emphasizes the importance of reasoning distillation. Motivated by our findings, incorporating reasoning distillation on knowledge-augmented LM can be a meaningful research direction for future work.
[1] Izacard et al., Atlas: Few-shot Learning with Retrieval Augmented Language Models, 2022
---
> W2. The finding of the paper, enhancing model effectiveness through KB retrieval, is not that surprising. This leads to a possible lack of innovation throughout the paper.
We respectfully disagree with the reviewer's point about the lack of innovation in our work, because **this comment seems to be based on a critical misunderstanding of our work.** Our finding is **not limited** to enhancing model effectiveness through KB retrieval, and our method is not a straightforward combination of KB retrieval and reasoning distillation. Rather, our main contribution is in identifying the underrepresented issue that arises in the **reasoning distillation paradigm** due to the limited capacity of small language models to memorize knowledge, as clearly indicated in lines 42-43 of the main paper and identified by other reviewers (jTmB, 7NNq, AUtH).
Also, our method is motivated by our theoretical analysis, which shows that knowledge augmentation with retrieval reduces the amount of memorization needed to perform well on knowledge-intensive reasoning tasks, and we find that knowledge augmentation during reasoning distillation enhances the performance of small models on knowledge-intensive reasoning tasks, especially in domains that require knowledge not included in general LMs, such as the medical domain. Furthermore, we also introduce a novel reranker to retrieve documents relevant to generating rationales that lead to correct answers.
---
> W3 & Q1. Using Wikipedia as KB may not be as effective as using specialized medical KB. What if KARD switches to use a medical KB for MedQA-USMLE?
We use Wikipedia as a KB in experiments due to its generalizability across diverse domains. Furthermore, since Wikipedia also contains in-depth information about the medical and general healthcare domain, using Wikipedia as a KB for medical-oriented tasks is not very limiting (reference: https://en.wikipedia.org/wiki/Health_information_on_Wikipedia). Please refer to the example in Appendix Table 11, where the passage includes compound medical knowledge though it is retrieved from Wikipedia.
Moreover, following your suggestion, we experiment with PubMed as a knowledge base and summarize the results in the below table (initial experimental results already exist in Table 1 of Supplementary File). The results show that KARD with PubMed outperforms standard reasoning distillation but underperforms KARD with Wikipedia due to the limited ability of BM25 to retrieve relevant documents from the medical domain corpus.
Specifically, considering the remarkable performance gain of KARD with silver knowledge compared to BM25, the PubMed knowledge base is a valuable resource for knowledge augmentation. However, BM25 with a question as a query struggles to retrieve relevant documents from the PubMed knowledge base. This observation shows a future research direction on retrieval methods tailored to the medical knowledge base.
| Flan-T5 Base | Wikipedia | PubMed |
|------------------------|:---------:|:------:|
| Reasoning Distillation | 31.03 | 31.03 |
| KARD (BM25) | 33.14 | 31.58 |
| KARD (Reranker) | 38.15 | 36.84 |
| KARD (Silver knowledge) | 40.30 | 45.48 |
---
> Q2. What is the time cost to train a reranker for computing the similarity between each pair of Wiki articles and training samples?
In the training phase of the reranker, the initial step involves retrieving a set of candidate documents following the procedure outlined in lines 205-207 of the main paper, where the retriever takes about a second per training instance. Importantly, we can cache these candidate documents before the reranker training begins. As a result, there is no extra cost for computing similarity between all pairs of Wiki articles and training samples during reranker training.
In other words, we only need to load cached candidate documents for each instance in the minibatch. We summarize the wall-time for each operation in a single training iteration below to show that the time consumption for loading cached candidate documents is negligible compared to forward and backward computation for model training.
| | data load | forward | backward |
|--------------------|-----------|---------|----------|
| Wall time (second) | 0.01 | 0.9 | 0.14 |
| Ratio | ~1% | 85.7% | 13.3% |
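The caching pattern described above can be sketched as follows. This is an illustrative toy sketch with hypothetical names, not the paper's actual implementation: candidate documents are retrieved once before training, and each training step only performs a cheap dictionary lookup.

```python
def build_candidate_cache(instances, retrieve, k=10):
    """Retrieve top-k candidate documents once per training instance."""
    return {inst["id"]: retrieve(inst["rationale"], k) for inst in instances}

def training_step(batch, cache):
    """Load cached candidates for a minibatch; no retrieval at train time."""
    return [cache[inst["id"]] for inst in batch]

# Toy usage with a stand-in retriever over a two-document corpus.
corpus = ["insulin regulates glucose", "wikipedia is a knowledge base"]

def toy_retrieve(query, k):
    # Rank documents by word overlap with the query (a stand-in for BM25).
    overlap = lambda doc: len(set(query.split()) & set(doc.split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

instances = [{"id": 0, "rationale": "insulin lowers blood glucose"}]
cache = build_candidate_cache(instances, toy_retrieve, k=1)
candidates = training_step(instances, cache)
```

Only `build_candidate_cache` incurs retrieval cost; every later epoch pays only the dictionary lookup, matching the ~1% data-load share in the table above.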
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the reply. My main concern is about W2.
Reasoning distillation is an important problem. Based on this, introducing the paradigm of knowledge augmentation is an innovation. However, I feel that this innovation is not hard to think of. I hope the author can answer the following questions:
1. Is introducing knowledge augmentation to reasoning distillation a challenging problem? What are the new challenges compared to traditional retrieval augmentation? Based on my understanding, the author's approach is similar to traditional retrieval augmentation methods, except that the knowledge is generated by a Large LM.
2. For the paradigm of knowledge augmentation in reasoning distillation, what new insights or inspirations can this paper offer to readers? As I said, introducing knowledge augmentation in reasoning distillation can enhance the performance, which is not something that is particularly surprising.
---
Reply to Comment 1.1.1:
Title: Response by Authors (1/2)
Comment: Thank you for reading our response and then providing additional comments on it. We sincerely appreciate your follow-up response and we would like to address your remaining concerns below.
First of all, thank you for acknowledging that **reasoning distillation is an important problem**, and our contribution on introducing the paradigm of **knowledge augmentation into reasoning distillation is an innovation**. On the other hand, we understand your concern that this innovation may not be hard to think of, since there are some prior works on knowledge augmentation methods for language models. We answer your questions to address your concerns below.
---
> **Q1.** Is introducing knowledge augmentation to reasoning distillation a challenging problem?
Yes. Introducing knowledge augmentation to reasoning distillation is a challenging problem, which is significantly different from the challenge of previous retrieval-augmented methods.
First of all, the core idea and challenge of reasoning distillation is to train the small language model to generate “rationales” generated by large language models.
Regarding this challenge, we theoretically motivate that the existing naive reasoning distillation is suboptimal for knowledge-intensive tasks due to the lack of memorization capacity of small LMs. At this point, to supplement memorization capacity, our method needs an additional module to retrieve the **relevant knowledge from the external Knowledge Base (KB) to generate the high-quality rationale**. We believe this aspect significantly distinguishes our method from existing retrieval-augmented methods.
In particular, as we discussed in lines 96-100, previous methods like RAG [1] do not consider intermediate rationales which are crucial for complex knowledge-intensive tasks. Rather, they retrieve documents with a question as a query, and a language model generates an answer based on the question and retrieved documents in both training and inference times. However, in our experiments, we empirically found that it is difficult to retrieve appropriate documents with a question as a query, which implies that existing knowledge augmentation methods (e.g., Atlas/RAG) can underperform our method in reasoning distillation (see below paragraphs for experimental comparison against RAG).
In contrast, in our knowledge-augmented reasoning distillation, we can obtain knowledge helpful for generating the rationale by retrieving documents with the **rationale** generated by the LLM as the query during training. However, as we discussed in lines 100-106 of the main paper, **this approach poses a new challenge**: in the inference stage, the retriever must retrieve knowledge helpful for generating the rationale with only the “question” as the query, in order to generate meaningful rationales. To address this novel challenge, we introduce an algorithm to train a neural reranker tailored to knowledge-augmented reasoning distillation in Section 4.2 and empirically show the clear effectiveness of our proposed reranker.
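The train/inference query asymmetry described above can be illustrated with a minimal sketch. All names here are hypothetical and word-overlap scoring stands in for BM25 or a neural reranker; this is not the paper's implementation.

```python
def overlap_score(query, doc):
    # Word-overlap relevance, a stand-in for BM25 or a neural reranker.
    return len(set(query.split()) & set(doc.split()))

def retrieve(query, corpus, k=1):
    return sorted(corpus, key=lambda d: overlap_score(query, d), reverse=True)[:k]

corpus = [
    "metformin treats type 2 diabetes",
    "the eiffel tower is in paris",
]

question = "which drug treats diabetes"
rationale = "metformin is a first-line drug that treats type 2 diabetes"

# Training time: the LLM-generated rationale serves as the retrieval query.
silver_docs = retrieve(rationale, corpus)
# Inference time: only the question is available as the query.
inference_docs = retrieve(question, corpus)
```

The detailed rationale is typically a much stronger query than the short question, which is exactly the gap a reranker trained to mimic the rationale-based ranking is meant to bridge at inference time.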
Furthermore, we agree that traditional retrieval-augmented methods are applicable to reasoning distillation and some readers might be curious about the effectiveness of traditional retrieval-augmented methods in reasoning distillation. To address this question, **we additionally conduct experiments on reasoning distillation with RAG** on two datasets we used in our previous experiments with the base-sized model having 250M parameters and then present their results in the below table.
| (Flan-)T5 Base | MedQA-USMLE | StrategyQA |
|---|:---:|:---:|
| Knowledge-augmented Fine-tuning | 33.39 | 52.11 |
| RAG + Reasoning Distillation | 24.84 | 54.24 |
| KARD (Reranker) | **38.15** | **56.57** |
From the above empirical observation, we can emphasize that our method has significant advantages compared to the traditional retrieval augmented methods for reasoning distillation from the following three viewpoints:
- Our knowledge augmentation method is **more effective** in reasoning distillation than the traditional methods, as KARD outperforms RAG in both datasets in performance.
- Our method is **more efficient** than the traditional methods. We emphasize that RAG requires roughly **8 times more** computational budget (GPU memory) than KARD, since it needs to marginalize over retrieved documents during training.
- Our method is **more versatile** than the traditional methods. As clearly shown in experiments on our answer to W1 of Reviewer jTmB, our method is **applicable to both encoder-decoder (T5) and decoder-only (GPT) language models** in contrast to RAG which is only applicable to encoder-decoder language models.
We will include this discussion in the future revision.
[1] Lewis et al., Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, NeurIPS 2020.
---
Reply to Comment 1.1.2:
Title: Thank you for the score adjustment
Comment: We feel deeply grateful that the reviewer has read our response and adjusted the score accordingly.
We would greatly appreciate any feedback on the remaining concerns, so that we can discuss and possibly further improve our work.
Thank you again for your insightful questions and constructive suggestions. We are looking forward to your response. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your considerable efforts in reviewing our paper and providing insightful reviews to our work. We appreciate that reviewers find our proposed idea well-motivated and sound.
We have responded to the individual comments from the reviewers below, and believe that we have successfully addressed all of them. To summarize our response,
- **[jTmB]** We include additional experimental results on one more dataset (OpenbookQA [1]) and a decoder-only pre-trained language model (OPT [2]).
- **[RfCw, Z8sP]** We discuss the relation of our work against a knowledge-augmented language model, including a comparison with Atlas [3].
- **[jTmB, RfCw]** We discuss the use of the medical knowledge base (PubMed corpus) as an alternative to Wikipedia for the MedQA-USMLE task.
- **[jTmB, 7NNq]** We include more discussions and analyses on the experimental results of Table 1.
- **[7NNq]** We conduct an analysis of the dataset statistics to compare the different features of both datasets used in experiments.
- **[RfCw]** We analyze and discuss the time cost to train a reranker.
We respectfully recommend the reviewer read our response and leave a comment if any concern or question still remains.
Sincerely,
Authors
[1] Mihaylov et al., Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering, EMNLP 2018.
[2] Zhang et al., OPT: Open Pre-trained Transformer Language Models, preprint 2022
[3] Izacard et al., Atlas: Few-shot Learning with Retrieval Augmented Language Models, 2022 | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper deals with the challenge of utilizing small LMs in knowledge-intensive tasks. Recent LLMs have shown promising capabilities in tasks that require reasoning; however, deployment of such models can remain limited due to cost or data limitations. Thus, the authors turn to face the challenge of distilling reasoning into other LMs, mainly smaller LMs that might be more feasible for deployment. However, small LMs are limited by their inferior capacity for knowledge and understanding; therefore, an external source of relevant context could be utilized for bridging this gap.
The authors present Knowledge-Augmented Reasoning Distillation (KARD), which fine-tunes smaller LMs to generate rationales with the aid of an LLM with high reasoning capabilities, and to augment their knowledge with relevant contexts from an external knowledge base, in order to answer complex questions. KARD utilizes baseline approaches to tackle knowledge-intensive tasks, such as CoT prompting of the LLM, and fine-tunes a neural document re-ranker to retrieve high-quality contexts during inference when access to a large LLM is unavailable.
The authors test their system on two knowledge-intensive benchmarks that require some level of reasoning using one or multiple sources and/or steps (MedQA-USMLE and StrategyQA). The evaluations show that using KARD with a 250M-parameter model yields superior performance compared to fine-tuned, knowledge-augmented, and few-shot baselines. Furthermore, the authors analyze how model size, training set size, and reranker setup affect the performance of KARD.
The main contributions of the paper are: (1) a novel method that combines reasoning distillation with knowledge augmentation using neural rerankers; (2) analysis showing that small LMs (250M) are not sufficient for knowledge-intensive tasks in domain-specific cases; (3) KARD's performance vs. various baselines and techniques is superior.
Strengths: - The challenge presented is of high importance for real-world applications and has high value.
- Experimental setup, baselines chosen, and evaluation results of KARD on MedQA-USMLE and StrategyQA are convincing and significant, with clear accuracy improvements of smaller LMs (and also larger ones) compared to the different baselines, supporting the paper's main claim.
- Improving the neural re-ranker using the LLM's rationales aids KARD in finding relevant contexts during inference, beyond naive approaches like BM25.
- Well written with clear problem presentation, concept explanations, mathematical notations and definitions. It is easy to follow the concepts in the paper.
Weaknesses: - Limited evaluation on different datasets (only 2) and models (T5/Flan-T5). In addition, the performance of KARD methods vs. reasoning distillation on the StrategyQA task is limited compared to MedQA. The authors address this in the limitations section.
- A generic external knowledge base (Wikipedia) might not be suitable for medical-oriented tasks such as MedQA, for example. We can see further evidence of this in the KARD (silver knowledge, oracle) evaluation, where the gap between that model and the best KARD is smaller than the gap vs. the large LLM (ChatGPT).
- Somewhat limited development (or a naive approach) of the main contribution of the paper, that is, the combination of the neural ranker with reasoning distillation. To the authors' credit, they have addressed this in the paper. The authors address this in the limitations section.
- The authors have acknowledged, but did not evaluate, a joint objective function for the neural ranker + distillation.
- Despite the authors addressing many of the paper's limitations, I find that these limitations significantly hinder the reliability of the work done in the paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - The objective functions of the ranker and distillation (L_rerank and L_distill_kb) are independent. Why is that? Have you evaluated combining both model updates?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors broadly addressed the major weaknesses and limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your time and effort in providing constructive feedback, and we address your concern and questions below.
---
> W1 & W6. Limited evaluation which significantly hinders the reliability of work
We perform additional experiments on another dataset, OpenbookQA [1], and with a decoder-only language model (OPT [2]). As shown in the table below, our KARD outperforms other baselines in these experimental settings, showcasing its generalizability.
| | OpenbookQA (T5-base) | OpenbookQA (T5-large) | OpenbookQA (T5-xl) | MedQA-USMLE (OPT-350M) |
|:---|:---:|:---:|:---:|:---:|
|Fine-tuning|54.0|62.0|74.6|26.47|
|Knowledge-augmented FT|53.8|64.6|73.8|25.84|
|Reasoning Distillation|58.2|65.8|76.2|28.67|
|KARD (BM25)|55.4|65.4|75.6|31.26|
|KARD (Reranker)|**59.2**|**66.2**|**78.6**|**34.25**|
[1] Mihaylov et al., Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering, EMNLP 2018.
[2] Zhang et al., OPT: Open Pre-trained Transformer Language Models, preprint 2022
---
> W2. The performance improvement is limited in the StrategyQA task compared to the MedQA
We believe the performance improvement on the StrategyQA task is significant, considering that KARD enables small models to achieve performance comparable to models with rationales from ChatGPT, especially for the Large and XL model sizes.
The performance gain may seem limited due to a smaller performance gap between the model without any knowledge augmentation and the Oracle model on StrategyQA compared to MedQA.
---
> W3. A generic external knowledge base (wikipedia) may not be suitable for the medical-oriented tasks.
We use Wikipedia as a KB in experiments due to its generalizability across diverse domains. Furthermore, since Wikipedia also contains in-depth information about the medical and general healthcare domain, using Wikipedia as a KB for medical-oriented tasks is suitable (reference: https://en.wikipedia.org/wiki/Health_information_on_Wikipedia). Please refer to the example in Appendix Table 11, where the passage includes compound medical knowledge though it is retrieved from Wikipedia.
Moreover, we experiment with PubMed as a knowledge base and show the results in the below table (initial experimental results already exist in Table 1 of the Supplementary File). The results show that KARD with PubMed outperforms standard reasoning distillation but underperforms KARD with Wikipedia, due to BM25's limited retrieval ability on medical-domain passages.
Specifically, considering the huge gap between the performance of KARD with BM25 and KARD with silver knowledge, the PubMed knowledge base contains highly informative passages. However, BM25 with a question as a query has a limited ability to retrieve them, which results in worse candidate documents being reranked than those retrieved from Wikipedia. This observation suggests a future research direction on retrieval methods tailored to the medical knowledge base.
| Flan-T5 Base | Wikipedia | PubMed |
|---|:---:|:---:|
| Reasoning Distillation | 31.03 | 31.03 |
| KARD (BM25) | 33.14 | 31.58 |
| KARD (Reranker) | 38.15 | 36.84 |
| KARD (Silver knowledge, oracle)| 40.30 | 45.48 |
---
> W4. Somewhat limited development (or naive approach) of the main contribution of the paper.
The main focus and contribution of our paper is **our novel knowledge-augmented reasoning distillation and reranking methods** with their supporting theoretical and empirical results.
In particular, our main contribution is to **identify the issue arising in the reasoning distillation paradigm** due to the limited capacity of small language models to memorize knowledge. We propose an effective knowledge-augmented reasoning distillation to tackle the issue. Specifically, motivated by our theoretical analysis that knowledge augmentation with a retrieval reduces the amount of memorization to perform well on knowledge-intensive reasoning tasks, we retrieve a set of relevant documents that can guide the language model to correctly answer questions during the reasoning distillation. Lastly, we introduce a neural reranker to retrieve documents relevant to generating rationales that lead to answering correct answers.
---
> W5 & Q1. Reasons for independent training of small LMs and reranker, not joint training.
We would like to emphasize that we do not have to jointly train the reranker when distilling larger language models into smaller models. In particular, when distilling models with our method, we use the rationale that is generated from the large language model and the silver knowledge that is retrieved with the generated rationale (See Lines 187-190); therefore, the **reranking process is not involved during distillation**. On the other hand, our neural reranker is also trained separately by assigning high relevance scores to documents that are related to silver knowledge.
Moreover, it is not straightforward to train the neural reranker along with the loss from distilling language models, since reranking itself and choosing relevant documents are not differentiable operations. To enable it, we can alternatively utilize reinforcement learning algorithms [1]; however, they are known to suffer from high variance. Also, if we consider retrieved documents as latent variables, we can approximate the marginal likelihood of training objectives with the retrieved top-k documents, as done in previous work [2]. However, since we need a large number of k different documents for a better approximation of the marginalization during training, it is computationally prohibitive to jointly train the retriever and the distilled language model. We leave addressing such technical challenges as future work.
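For reference, the approximation in [2] treats the retrieved document $z$ as a latent variable and marginalizes over the top-$k$ retrieved documents (a sketch of the RAG-sequence objective in the notation of [2]):

$$p(y \mid x) \;\approx\; \sum_{z \,\in\, \text{top-}k\left(p_\eta(\cdot \mid x)\right)} p_\eta(z \mid x)\, p_\theta(y \mid x, z),$$

where $p_\eta$ is the retriever and $p_\theta$ is the generator. Backpropagating through this sum requires running the generator on all $k$ documents per example, which is why the memory cost of joint training grows with $k$.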
[1] Wang et al. R^3: Reinforced ranker-reader for open-domain question answering AAAI 2018.
[2] Lewis et al., Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, NeurIPS 2020. | null | null | null | null | null | null |
Robust Bayesian Satisficing | Accept (poster) | Summary: The paper proposes robust Bayesian satisficing, a new setting of BO that is similar to distributionally robust BO. Robust Bayesian satisficing aims to achieve a 'good enough' expected value given by some threshold $\tau$ and relaxed by the distribution distance between a reference and a true distribution. The paper proposes 2 new notions of regret, design an algorithm with regret bounds, and empirically compare this algorithm to DRBO and other suitable baselines.
Strengths: 1. Robust satisficing as a new optimization objective for BO is interesting and presents an alternative 'good enough' objective along with a relaxation based on distribution distance that is sensible. It enables distributionally robust BO in another way that does not involve uncertainty sets.
2. The new regret definitions make sense, and the proposed algorithm is supported by theoretical guarantees on its performance via regret bounds.
3. The empirical evaluations present support the claim that the proposed algorithm performs well with the proposed regret definitions.
4. Overall I believe that this work has relevance to the community, subject to the issues raised in the Weaknesses section being addressed properly.
Weaknesses: Technical concerns:
1. How is the threshold $\tau$ to be selected in a real-world problem? The paper states that it 'can be expressed as a percentage of the SO solution', but when $f$ is unknown, the expected value of the SO solution is unknown as well. From Algorithm 1, $\tau$ is an input prior to any BO rounds. This is an important question to answer, since one of the claimed advantages over DRBO is that there is no need to pick uncertainty sets, which may be unknown a priori. But now you have to pick $\tau$, which is also unknown a priori, so it seems that you have replaced one hyperparameter with another. In order to satisfy Assumption 1, it is claimed that $\tau$ can be dynamically selected, but that makes the regret and thus the regret bounds not well-defined, since the regret is a function of a constant $\tau$. If $\tau$ is to be dynamically selected and learned, then the algorithm and regret bounds should be explicitly written to take this into consideration, instead of claiming in a footnote that it 'can be straightforwardly adapted to work with dynamic thresholds'.
2. The experiments in the main paper are simple synthetic benchmarks, which by themselves do not constitute a sufficiently comprehensive empirical evaluation. I see that you have an interesting real-world benchmark on insulin dose allocation in the appendix; why is it hidden there without a reference from the main paper? Why aren't WRBO and SO tested on that benchmark?
Clarity issues:
1. The equation below line 102 defining $\kappa_{\tau, t}$ and the preceding sentence do not quite make sense. It seems to me that you do not need that sentence and equation: simply define fragility as in Eq. (1), and then the following sentence makes clear what $x_t^*$ and $\kappa_{\tau, t}$ are.
2. In all the figures, heatmaps are uninterpretable without a colorbar indicating the values that each color corresponds to.
3. Lemma 1 uses the maximum information gain $\gamma$ before it is defined, and includes a definition of the determinant even though the determinant is not used there.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: No additional questions other than those raised in Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and valuable insights.
### Weaknesses
#### Technical Concerns
- **(...how $\tau$ can be selected in a real-world problem...)** One real-world experiment concerning safe dose allocation for diabetes patients is presented in the supplementary document. Type 1 Diabetes Mellitus (T1DM) patients require bolus insulin doses (id) after meals for postprandial blood glucose (pbg) regulation. One of the most important factors that affect pbg is meal carbohydrate (cho) intake [C]. Let ${\cal X}$ and ${\cal C}$ represent admissible id and cho values. For $x \in {\cal X}$, $c \in {\cal C}$, let $g(x,c)$ represent the corresponding (expected) pbg value. Function $g$ depends on the patient's characteristics and can be regarded as unknown. The main goal of pbg regulation is to keep pbg close to a target level $K$ in order to prevent two potentially life-threatening events called hypoglycemia (e.g., pbg $<$ 70 mg/dl) and hyperglycemia (e.g., pbg $>$ 180 mg/dl). This requires $x\_t$ to be chosen judiciously based on the current $c\_t$. Patients rely on a method called cho counting to calculate $c\_t$. Oftentimes, this method is prone to errors [D]. The reported cho intake $\zeta\_t$ can differ significantly from $c\_t$. In order to use DRO, one needs to identify a range of plausible distributions for cho calculation errors, which is hard to calculate and interpret. On the other hand, specifying $\tau$ corresponds to defining an interval of safe pbg values around $K$ (e.g., pbg $=$ 125 mg/dl) that one is content with, which is in line with standard clinical practice [E]. We will move this experiment from the supplemental document to the main paper. In addition, in response to the comments of reviewers Gzxd and KF6g, we performed new experiments that compare the accumulated rewards of RoBOS and other competing benchmarks. Moreover, we also carried out simulations that show the sensitivity of the results to the aspiration level $\tau$ set by the learner. These results can be found in the response pdf.
- **(...$\tau$ expressed as a percentage of SO solution...)** The interpretation that $\tau$ can be chosen as a percentage of the stochastic optimization problem $Z\_t$ pertains to the case when $f$ is known. We discussed this under the optimization objective subsection before the regret definitions. We will clarify this issue in the revised version. Nevertheless, in learning problems in which $f$ is unknown but the value of $Z\_t$ is known, this interpretation can still be used.
- **(...an alternative way of setting $\tau$ when $f$ is unknown...)** For instance, with the power of Bayesian modelling, $f$ can be estimated with confidence bounds, which can be used to pick a meaningful $\tau$ value. For example, picking $\tau \geq \max\_{\mathcal{X}\times\mathcal{C}} \text{ucb}\_t(x,c)$ almost certainly sets us up for failure ($\tau$ is not achievable with high probability). In contrast, picking $\tau \leq \langle w\_t, \text{lcb}^t\_{\hat{x}'\_t}\rangle$, where $\hat{x}'\_t := \arg\max\_{x \in {\cal X}} \langle w\_t, \text{lcb}^t\_x\rangle$, guarantees with probability at least $1-\delta$ that the optimization problem (1) is feasible.
- **(...regret bounds for time-varying $\tau$...)** By going over the original proofs, it can be verified that $\tau$ can indeed be selected dynamically as $\tau\_t$, $t \geq 1$: if we change the thresholds in our regret definitions to $\tau\_t$ and update Assumption 1 to require $\tau\_t \leq \langle w\_t, f\_{\hat{x}\_t}\rangle$ for all $t \in [T]$, then all derived regret bounds still hold.
- **(...experiments...)** The real-world experiment has been moved to the main paper with the addition of the WRBO and SO benchmarks. During the response phase, we also performed new experiments, including a comparison of the cumulative rewards of different algorithms on the true distributions and a sensitivity analysis w.r.t. $\tau$. These can be found in the response pdf.
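For intuition, the lower-confidence-bound rule for picking a feasible $\tau$ described two bullets above can be written as a short sketch. This assumes a finite action set, a finite context set, and precomputed LCBs; the function name and array shapes are illustrative, not from the paper's code.

```python
import numpy as np

def dynamic_tau(lcb, w):
    """Feasibility-preserving aspiration level: tau_t = <w_t, lcb^t_{x'_t}>,
    where x'_t maximizes <w_t, lcb^t_x> over the action set.

    lcb : (num_actions, num_contexts) lower confidence bounds from the GP posterior
    w   : (num_contexts,) weights of the reference distribution P_t
    """
    scores = lcb @ w              # <w_t, lcb^t_x> for every action x
    return float(np.max(scores))  # any tau <= this keeps the RS problem feasible w.h.p.
```

Any $\tau\_t$ at or below this value keeps the round-$t$ robust satisficing problem feasible with high probability.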
#### Clarity
1) We thank the reviewer for their suggestion and we note that we have slightly updated the definitions to increase clarity. As you suggested, $\kappa\_{\tau,t}$ can be computed in two steps. We wrote the first equation to highlight robust satisficing as a single optimization problem.
In RS, the objective is to find $x^*\_t \in \mathcal{X}$ that solves in each round $t$
\begin{equation}
\kappa\_{\tau,t} = \min k
~ \text{s.t.} ~ \mathbb{E}\_{c\sim P}[f(x,c)] \geq \tau - k \Delta(P,P\_t), ~ \forall P \in {\cal P}\_0 ~ , x \in {\cal X}, ~ k \geq 0 ~ . \tag{1}
\end{equation}
To find $x^*\_t$, we can first compute the *fragility* of $x \in {\cal X}$ as
\begin{equation*}
\kappa\_{\tau,t}(x) = \min k ~ \text{s.t.} ~ \mathbb{E}\_{c\sim P}[f(x,c)] \geq \tau - k \Delta(P,P\_t), ~ \forall P \in {\cal P}\_0, ~ k \geq 0 ~ .
\end{equation*}
The robust satisficing objective is feasible when (1) has a solution.
2) Colorbars added where necessary.
3) Fixed.
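For intuition, the two-step fragility computation in (1) above can be sketched for a finite context space and a finite set of candidate distributions. This is a toy illustration under stated assumptions, not the paper's implementation: total variation stands in for the MMD discrepancy $\Delta$, and all names are illustrative.

```python
import numpy as np

def tv(p, q):
    """Total variation distance; a stand-in for the discrepancy Delta(P, P_t)."""
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

def fragility(f_x, tau, p_ref, candidates, dist=tv):
    """kappa_tau(x): smallest k with E_P[f(x,c)] >= tau - k * dist(P, P_ref)
    for every candidate distribution P; np.inf when infeasible."""
    kappa = 0.0
    for p in candidates:
        shortfall = tau - float(np.dot(p, f_x))  # tau - E_P[f(x,c)]
        if shortfall <= 0:
            continue                              # constraint holds with k = 0
        d = dist(p, p_ref)
        if d == 0:
            return np.inf                         # tau unreachable even with no shift
        kappa = max(kappa, shortfall / d)
    return kappa
```

The robust satisficing action would then be the $x$ with the smallest fragility.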
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will be keeping my score as it is. | Summary: This paper studies a contextual Bayesian optimization problem when the true and reference distributions of the context can be different due to distribution shifts. The authors propose an algorithm called robust Bayesian satisficing algorithm (RoBOS) based on the idea of robust saisificing (RS). Through theoretical analysis and empirical results, the authors demonstrate their results on two notions of regret: lenient regret and robust satisficing regret. For the theoretical part, the authors have a thorough analysis and show that RoBOS achieves with high probability $\tilde{\mathcal{O}}(\gamma_T \sqrt{T})$ robust satisficing regret and $\tilde{\mathcal{O}}(\gamma_T \sqrt{T} + \sum_{t=1}^T \epsilon_t)$ lenient regret, where $\gamma_T$ is the maximum information gain and $\epsilon_t$ is the amount of distribution shift in round $t$. For the empirical part, the authors propose two synthetic benchmarks and one real-world benchmark, and for all cases, they demonstrate RoBOS outperforms distributionally robust BO (DRBO).
Strengths: Distribution shift is an important challenge in Bayesian Optimization, and it is great authors propose a new algorithm that attempts to address these issues, it also brings in some new interesting future research questions. The proposed question and results are new, to my best knowledge. I also like the algorithm, which naturally combines the empirical fragility and common algorithms in the contextual Bayesian optimization setting. The paper is well-written in most parts except for a few minor parts.
Weaknesses: My biggest concern is the two regrets defined in the paper: the lenient regret and the robust satisficing regret, which depend explicitly on the threshold $\tau$. In particular, previous papers on robust satisficing [1] also did the analysis using common performance measures such as average performance. The authors also lack justification for their proposed new notion, robust satisficing regret, beyond matching the definition of robust satisficing, which might favor RoBOS. In addition, I think the theoretical analysis employed is standard in the literature.
I think the following issues need to be addressed:
1. Could you also demonstrate (theoretically or empirically) the performance of RoBOS if the goal is to maximize the reward?
2. How sensitive is the choice of $\epsilon$ in DRBO/ WRBO versus the choice of $\tau$ in RoBOS?
In the literature, e.g., Figure 1 of [1], they measured the performance of the robust satisficing model over a sequence of $\tau$ values and solved the distributionally robust optimization model over a sequence of radii $r$, then compared the efficient frontiers on both average performance and CVaR. This might be a way to further justify the effectiveness of RoBOS.
Reference:
[1] Daniel Zhuoyu Long, Melvyn Sim, and Minglong Zhou. Robust satisficing. Operations Research, 71(1):61–82, 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: There are a few places where terms are not defined in the statement, or they are confusing:
1. In lines 102 and 103, can you be more specific on the difference between $\kappa_{\tau,t}$ and $\kappa_{\tau,t}(x)$? In particular, in both optimization problems, what are the decision variables, and what is fixed?
2. In Lemma 1 (line 155), the term $\gamma_{t-1}$ is not defined; its definition appears in line 185 and should be moved earlier.
3. The definition of $B’$ should be moved from line 227 to the statement of Theorem 4.
4. In Figure 3, the first line, should be benchmark “2”.
Questions for the empirical results:
1. In Figure 1, why $\tau = Z_0/2$?
2. In line 256, the true distribution is picked randomly from the set $\mathcal{U}_t$, can you be more specific on this?
3. In Figure 3, the latter 2 plots seem incomplete. For lenient regret, why does the plot only plot for $t \le 100$? For robust satisficing regret, only RoBOS is complete.
4. In both synthetic benchmarks, could you justify the choice of $\tau$? In particular, are the experimental results consistent for a wide range of $\tau$?
Questions for the theoretical results:
1. Page 10 of the appendix, line 288: is the reward function $r(t)=-|o(t-K)_+|$?
2. How is Eq (29) in line 323 derived? Why can we set $B=1$ in Lemma 5? It might be good to be more specific here.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors stated their proposed method RoBOS may be worse than DRBO if the goal is to maximize the reward, which is the main limitation of their work. The other potential limitation I think is the robust satisficing model might be sensitive to the choice of $\tau$.
I suggest the author also analyze their results in other performance measures and see how sensitive their results are with respect to the choice of $\tau$.
There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and valuable insights.
### Weaknesses
- **(...justification for robust satisficing regret)**
Robust satisficing regret evaluates how our algorithm fares against the robust satisficing action. In particular, the true robust satisficing action $x^*\_t$ achieves
\begin{equation*}
\mathbb{E}\_{c\sim P^*\_t}[f(x^*\_t,c)] \geq \tau - \kappa\_{\tau,t} \Delta(P^*\_t,P\_t) ~.
\end{equation*}
Robust satisficing regret, evaluated at each round, measures the expected reward of the action chosen by our algorithm against the guaranteed reward of $x^*\_t$. When the selected action meets or exceeds this guaranteed reward, no regret is accumulated. However, if the chosen action falls short of this guarantee, then regret is accrued. Based on the above discussion, we think that robust satisficing regret is the right choice when evaluating the loss of the learner w.r.t. the robust satisficing benchmark. That being said, we agree with the reviewer about the importance of investigating the cumulative reward performance of RoBOS. Even though our goal is to satisfice with respect to the aspiration level, it is still meaningful to analyse the cumulative reward. For instance, Figure 1 of the main paper shows a scenario in which the robust satisficing solution attains the optimal reward at the true distribution $P^*\_t$, while DRO with imprecise uncertainty sets cannot do so. Since RoBOS converges to the robust satisficing solution, in this example RoBOS is expected to attain higher cumulative reward than DRBO.
- **(...performance of RoBOS in reward maximization...)**
During the response phase, we compared RoBOS and DRBO in terms of their cumulative rewards. The results can be found in the response pdf.
- **(...sensitivity to $\tau$...)**
During the response phase, we compared RoBOS and DRBO in terms of their sensitivity to the choice of $\tau$ and $\epsilon$. The results can be found in the response pdf.
### Questions
#### Definitions
1) $\kappa\_{\tau,t}$ and $\kappa\_{\tau,t}(x)$ can be defined as following. In RS, the objective is to find $x^*\_t \in \mathcal{X}$ that solves in each round $t$
\begin{equation*}
\kappa\_{\tau,t} = \min k ~ \text{s.t.} ~ \mathbb{E}\_{c\sim P}[f(x,c)] \geq \tau - k \Delta(P,P\_t), ~ \forall P \in {\cal P}\_0 ~ , x \in {\cal X}, ~ k \geq 0 ~ .
\end{equation*}
To find $x^*\_t$, we can first compute the *fragility* of $x \in {\cal X}$ as
\begin{equation*}
\kappa\_{\tau,t}(x) = \min k ~ \text{s.t.} ~ \mathbb{E}\_{c\sim P}[f(x,c)] \geq \tau - k \Delta(P,P\_t), ~ \forall P \in {\cal P}\_0, ~ k \geq 0 ~ .
\end{equation*}
In the first problem $\tau$ and $P\_t$ are fixed while the decision variable is $x\in\mathcal{X}$. In the second problem, $x$, $\tau$ and $P\_t$ are fixed and we compute the minimum $k$ that satisfies the constraints.
2) Fixed.
3) Fixed.
4) Fixed.
#### Empirical
1) Figure 1 illustrates a proof of concept example where $\tau$ is chosen to be achievable for only one action, highlighted by the green rhombus.
2) In the second synthetic experiment we pick the true distribution $P^*\_t \sim \mathcal{N}(\mu\_t + N\_t, 5^2)$, where $\mu\_t$ is the mean of the reference distribution $P\_t$ and $N\_t\sim \mathcal{U}(-6,6)$ is set so as to ensure that the distributional shift is not greater than $\epsilon$.
3) We appreciate your careful review of our work. However, we'd like to clarify that all our plots do extend until $t=200$. It's possible that there may have been some technical issues with the visualization during the pdf loading process. We recommend viewing the file in a different pdf reader. We apologize for any inconvenience caused and we are here to assist if further clarifications are needed.
4) In the first proof of concept synthetic experiment, $\tau$ is chosen to be achievable only by one of the actions. In the second synthetic experiment, $\tau$ was selected as half of the function maximum arbitrarily. Further, a discussion about how to select $\tau$ in a real-world setting is given in our global response. We have made revisions in response to your comments and have conducted an additional experiment to analyse RoBOS' sensitivity to $\tau$. We also compare this with DRBO's sensitivity to the choice of $\epsilon$. These results are included within the response pdf.
#### Theoretical
1) The pseudo reward function we defined for the insulin dosage allocation experiment is $-|o(t) - K|$. Here $o(t)$ is the blood glucose level of the patient, measured 150 minutes after their meal, and $K=112.5$ mg/dl is the target blood glucose level. Our pseudo reward function measures the discrepancy between the patient's blood glucose and the target level.
2) In the mentioned equation, $\langle w^*\_t, \sigma\_t(x\_t,\cdot)^2\rangle$ corresponds to $m\_t$ in Lemma 5. Hence $S\_t$ corresponds to $\sigma\_t(x\_t, c\_t)^2$, $c\_t\sim P^*\_t$. Due to our assumption on the kernel that $\sigma(x\_t,c\_t) \leq 1$, $B$ in Lemma 5 can be set to 1.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response that addresses my concerns.
I changed my rating to 6. | Summary: This paper studies robust satisficing in contextual Bayesian optimization under distribution shift in the distributions of the context.
They show that under some assumptions their algorithm achieves sublinear lenient regret and under some relaxed assumptions they achieve sublinear robust satisficing regret.
They compare their method against the distributionally robust optimization approach [10] for contextual Bayesian optimization.
Strengths: I have my reservations against points that I mentioned below, but I believe this research direction is valuable and interesting.
Weaknesses: The writing of the paper can be improved. The clarity and coherence of the writing could be enhanced. There are instances where the ideas are not effectively communicated, leading to confusion for the reader. Furthermore, the structure of the paper could benefit from more logical organization and smoother transitions between sections. Additionally, some grammatical and punctuation errors need to be addressed, as they can detract from the overall quality of the paper. A revision focusing on refining the writing style and ensuring a more polished presentation would greatly strengthen the paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) Why did you choose to use the MMD distance to measure the discrepancy between distributions?
2) In Table 1, there must be a space between $\min$ and $k$ for RS objective.
3) I do not fully understand why one would prefer RS over DRO. Can you please clarify this point? When it would make sense to use RO, RS and DRO?
4) Line 141: Please provide references that GP assumption is common.
5) Lemma 1: Where does the det operator appear in Lemma 1? Why is it introduced there? What is $\gamma_{t-1}$? What is $e$?
6) If the LCB is never used, why is it introduced?
7) How restrictive is the regularity assumption on $f$?
8) Optimism in the face of uncertainty is a well-known concept but I am not sure how it is motivated in Section 3 and how your model is optimistic. Can you please elaborate on this?
9) For a fair comparison between DRBO and your method, you should consider the robust regret introduced in [10].
10) Is $\tau$ fixed throughout the algorithm? Referring to footnote at Page 7, I understand that you claim $\tau$ can be selected adaptively but I think it must be selected adaptively otherwise $\hat \kappa_{\tau, t}(x)$ can be infinity, right?
11) Can you please intuitively explain what the model in (1) satisfies and what it is robust against? Why can $\tau$ be expressed in terms of $Z_t$?
12) I find it unnatural that the regret bound in Theorem 3, is not dependent on $\tau$. I understand it is because $\tau$ appears also in the RS regret definition. I think a more natural approach would be separating these two $\tau$'s and seeing that the regret only depends on the difference of these two $\tau$'s. (similar argument goes for the lenient regret) Can you also show the regret guarantee on the robust regret as defined in [10]?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and valuable insights.
### Weaknesses
**(...refining the writing style and ensuring a more polished presentation...)**
We have undertaken a thorough revision to address the concerns raised. Specifically, we've refined the writing for better clarity, restructured the content for a more logical flow, and ensured smoother transitions between sections.
### Questions
1) The MMD distance, defined using a kernel function, is a prevalent measure for distributional shifts in RKHS [A, B, 10]. Its overlap with kernel-based methods is notable, and with kernels like RBF and Matérn-$\nu$, MMD acts as a metric. Its ease of computation and clear interpretation aid theoretical analysis.
2) Fixed.
3) We kindly refer the reviewer to our global response.
4) Some of the works that utilize GPs in the RKHS setting are [1, 10, 31], which are also referred to in the main paper. We now also cite them in the GP assumptions section.
5) There was a writing error, now fixed and confidence bounds updated to a more recent version. In the final version $\gamma\_t$ (the maximum information gain) is defined in its proper place.
6) Fixed.
7) The assumption that $f$ belongs to an RKHS with bounded Hilbert norm is a very common one in the literature [1, 31]. The RKHS assumption induces smoothness conditions
\begin{equation*}
|f(x) - f(y)| = |\langle f, k(x,\cdot) - k(y,\cdot) \rangle| \leq \lVert f\rVert\_{\mathcal{H}} \lVert k(x,\cdot) - k(y,\cdot)\rVert\_{\mathcal{H}}
\end{equation*}
by Cauchy-Schwarz inequality.
8) RoBOS's optimism arises from its use of the UCB of the objective function, similar to algorithms like GP-UCB, ensuring a balance between exploration and exploitation. Lemma 2 emphasizes this: with a probability of at least $1-\delta$, we have $\hat{\kappa}\_{\tau,t}(x) \leq \kappa\_{\tau,t}(x)$ for all $x\in\cal X$ and $t \geq 1$. This indicates that the algorithm's estimated fragility is always less than or equal to the actual fragility, confirming RoBOS's optimistic nature.
9) In the response period, we conducted experiments comparing DRBO and RoBOS on the robust regret from [10], with figures given in the response pdf. Our findings show that RoBOS, even with linear robust regret, can outperform DRBO in cumulative reward. This occurs when the ambiguity set is not a tight representation of the distributional shift, so that the DRO solution to which DRBO converges is suboptimal under the true distribution.
10) $\tau$ can indeed be fixed or time-varying; note that our algorithm works even when $\hat{\kappa}\_{\tau,t}(x)$ is infinite for some $x\in \mathcal{X}$. We only require that the robust satisficing problem is feasible, meaning that the following optimization problem has a solution: find $x^*\_t \in \mathcal{X}$ that solves in each round $t$
\begin{equation}
\kappa\_{\tau,t} = \min k ~ \text{s.t.} ~ \mathbb{E}\_{c\sim P}[f(x,c)] \geq \tau - k \Delta(P,P\_t), ~\forall P \in {\cal P}\_0 ~, x \in {\cal X}, ~k \geq 0 ~. \tag{1}
\end{equation}
which is feasible under $\tau \leq \mathbb{E}\_{c\sim P\_t}[f\_{\hat{x}\_t}]$, for all $t\in [T]$, where $\hat{x}\_t := \arg\max\_{x \in {\cal X}} \langle w\_t, f\_x \rangle$. If this assumption does not hold in round $t$, then the robust satisficing problem is infeasible, which means that $\kappa\_{\tau,t} = \infty$ and there is no robust satisficing solution. Therefore, measuring the regret in such a round would be meaningless. In practice, if the learner is flexible about its aspiration level, this assumption can be relaxed by dynamically selecting $\tau$ at each round to be less than $\langle w\_t, \text{lcb}^t\_{\hat{x}'\_t} \rangle$, where $\hat{x}'\_t := \arg\max\_{x \in {\cal X}} \langle w\_t, \text{lcb}^t\_x \rangle$. The optimism principle ensures that if the primary problem (1) is feasible, the problem with the estimated fragility $\hat{\kappa}\_{\tau,t}$ remains so. Given feasibility, an expert can set the satisficing goal $\tau$; see our example on diabetes.
11) The model in (1) tries to find a solution that achieves the desired threshold $\tau$, i.e.\ satisfice. The model picks the solution that has the lowest fragility $\kappa\_{\tau,t}(x)$. The fragility can be viewed as the minimum rate of suboptimality one can obtain with respect to the threshold, per unit of distribution shift from $P\_t$. Since the fragility is calculated over all possible context distributions $\forall P \in {\cal P}\_0$, the model gives robustness guarantees under all possible distributional shifts. In particular, the true robust satisficing action $x^*\_t$ achieves
\begin{align*}
\mathbb{E}\_{c\sim P^*\_t}[f(x^*\_t,c)] \geq \tau - \kappa\_{\tau,t} \Delta(P^*\_t,P\_t) ~
\end{align*}
under the true distribution $P^*\_t$, no matter what the distributional shift is. $\tau$ can be explained as a percentage of $Z\_t$ when $f$ is known (not in the learning problem, where $f$ is unknown). Only knowing $Z\_t$ (but not $f$ or optimal $x$), the decision-maker can set $\tau$ as the percentage of $Z\_t$ that it is content with. This is just one example of how $\tau$ can be selected. See our response to your previous comment for another example.
12) The $\tau$ value found within our regret definitions is identical to the $\tau$ utilized as an input to our algorithm. It represents the aspiration level that our algorithm strives to reach, and forms the benchmark against which we define our regret notions. Its absence in the regret bound is due to its direct link with the objective function $f$, assuming $\tau \leq \mathbb{E}\_{c\sim P\_t}[f\_{\hat{x}\_t}]$.
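As a sketch of the MMD computation mentioned in answer (1) above, for discrete distributions on a shared finite support (the RBF lengthscale, function names, and one-dimensional contexts are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def rbf_gram(C, lengthscale=1.0):
    """RBF kernel Gram matrix over a finite one-dimensional context set C."""
    C = np.asarray(C, dtype=float).reshape(-1, 1)
    return np.exp(-((C - C.T) ** 2) / (2.0 * lengthscale ** 2))

def mmd(p, q, K):
    """MMD between discrete distributions p, q on the same support:
    MMD^2(p, q) = (p - q)^T K (p - q) for Gram matrix K."""
    d = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    return float(np.sqrt(max(d @ K @ d, 0.0)))
```

With a characteristic kernel such as RBF, this quantity is a proper metric on distributions, which is part of what makes it a convenient choice for the discrepancy $\Delta$.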
---
Rebuttal Comment 1.1:
Title: Response to the author rebuttal
Comment: I thank the authors for their detailed response. I have some further comments that I realize when I was going over the paper.
* In Figure 1, is there a typo in the definition of $Z_0 = \mathbb E_{c \sim P_t}$?
* I am still having a hard time understanding why $\tau$ can be explained as a percentage of $Z_t$ when $f$ is known.
* I do not see the assumption $\tau\leq \mathbb E_{c \sim \mathbb P_t}[f_{\hat x_t}]$ in the statement of Theorem 3. In Line 190, it is stated that Theorem 3 holds for any threshold $\tau$. Can you please clarify these points?
Thank you!
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their careful reading of our paper and the new comments.
1. Yes, $Z\_0$ should be $Z\_t = \max\_{x \in {\cal X}} \mathbb{E}\_{c \sim P\_t} [f(x,c)]$. We will fix this typo.
2. The robust satisficing optimization problem (given after line 102) at round $t$ is feasible when $\tau \leq Z\_t$. When $\tau > Z\_t$, the problem is infeasible, and there is no robust satisficing action $x^*\_t$. Since the learner receives the reference distribution $P\_t$ at the beginning of each round, when $f$ is known, $Z\_t$ can be computed exactly by the learner. Since the true distribution $P^*\_t$ and the amount of distribution shift are unknown, the optimal action (with expected reward $Z\_t$) under $P\_t$ can be far from optimal under $P^*\_t$. To protect against distribution shifts, the learner can solve the robust satisficing problem. For instance, if the learner is content with receiving 90% of $Z\_t$ under no distribution shift, it can set $\tau = 0.9 Z\_t$ and solve for $x^*\_t$. Now, $x^*\_t$ will offer an expected reward that is at least $0.9 Z\_t$ if $P^*\_t = P\_t$. If $P^*\_t \neq P\_t$, $x^*\_t$ will offer an expected reward that is at least $0.9 Z\_t - \kappa\_{\tau,t} \Delta(P^*\_t, P\_t)$. As an example of why the learner can be content with 90% of $Z\_t$, see our diabetes example in the general response and the classifier example given in response to Reviewer gjzH.
One may also wonder what will change if the learner sets $\tau' = 0.95 Z\_t$. In this case, the fragility under $\tau'$, i.e., $\kappa\_{\tau',t}$ can be higher than $\kappa\_{\tau,t}$, which will result in diminished expected reward guarantees under large distribution shifts.
3. Thanks for noticing this. As we mentioned in our response above, when $\tau > \mathbb{E}\_{c \sim P\_t} [f\_{\hat{x}\_t}] = Z\_t$, the robust satisficing problem is not feasible, i.e., $\kappa\_{\tau,t} = \infty$. If this is the case, by looking at (3), one can say that by convention, the regret in round $t$ is $0$ independent of the chosen action $x\_t$. Since there is no robust satisficing action $x^*\_t$ in round $t$, there is no way we can evaluate the loss of the learner with respect to the robust satisficing action in round $t$.
So, by definition, when $\tau > Z\_t$, the regret will be $0$ and the regret bounds will still hold.
To improve the clarity of the paper, we will explain this in the statement of Theorem 3 in the revised paper.
We hope that our response above has clarified your concerns. If you have any other comments, we will happily address them. | Summary: I think the main contributions of this paper are as follows:
- Proposes a new decision-making framework called robust Bayesian satisficing (RBS) which combines robust satisficing with Bayesian optimization. RBS aims to achieve a satisfactory solution under distributional shifts by observing a predefined satisfactory threshold. This is different from distributionally robust optimization which requires an ambiguity set and stochastic optimization which optimizes for a given reference distribution.
- Defines two regret measures to evaluate the performance of RBS algorithms: lenient regret and robust satisficing regret. Lenient regret measures the cumulative loss of an algorithm's chosen actions with respect to an aspiration level. Robust satisficing regret measures the loss with respect to the robust satisficing benchmark which is the aspiration level minus the fragility (a measure of suboptimality per unit distribution shift). The paper shows the connection between these two regret measures.
- Proposes an RBS algorithm called Robust Bayesian Optimistic Satisficing (RoBOS) which uses Gaussian processes to model the objective function. RoBOS only requires an aspiration level as input and does not need an ambiguity set. RoBOS chooses actions to minimize the estimated fragility which is an optimistic estimate of the true fragility.
- Proves that RoBOS achieves sublinear robust satisficing regret and lenient regret under certain assumptions. The regret bounds depend on the maximum information gain and the sum of distribution shifts.
- Demonstrates the effectiveness of RoBOS on synthetic problems and compares it with other robust Bayesian optimization algorithms.
Strengths: Robust Bayesian satisficing is a novel framework that combines robustness to distributional shifts with satisficing behavior. This provides an alternative to existing paradigms like distributionally robust optimization and stochastic optimization. RBS does not require precise knowledge of the ambiguity set and can handle unknown distribution shifts. Also, the paper provides theoretical guarantees on the regret of RoBOS under some assumptions. The robust satisficing regret and lenient regret of RoBOS grow sublinearly with time. The regret bounds show the dependence on maximum information gain and the sum of distribution shifts, providing insight into how RoBOS handles distributional shifts. In addition, the experimental results verify the theoretical findings on the sublinearity of the regret bounds. Finally, the paper is well-written, clear, and easy to follow.
Weaknesses: - Some assumptions seem to be strong for this problem. For example, the bounds require the sum of distribution shifts to be sublinear in time which may not always hold in practice. It would be good to discuss how the algorithm behaves when these assumptions are violated.
- The experiments are limited to synthetic problems. It would be good to evaluate RoBOS on some real-world benchmark problems to demonstrate its effectiveness in practical settings. Comparisons with more algorithms on these problems would also strengthen the experimental evaluation. There are many datasets for distributional shifts that the authors could leverage to verify their algorithm.
- Although it is noted in the conclusion as a future direction, it would be beneficial to discuss the effects of continuous contexts in this problem, since the focus of this paper is on deterministic contexts. Analyzing RoBOS when contexts are stochastically generated would provide greater insight into how it handles uncertainty.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In addition to the previous section:
I am curious how this approach can be utilized for test-time distribution shifts and adaptations.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: They covered this part very well in my view.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments and valuable insights.
### Weaknesses
- **(Some assumptions...)**
Indeed, to achieve sublinear lenient regret, some assumptions have to be made on the distribution shift. Nothing much can be said in an adversarial environment where the reference distribution $P\_t$ is judiciously chosen away from the true distribution $P^*\_t$ in each round. However, one interesting practical setting is the case where the reference distribution is the empirical distribution. This corresponds to the data-driven optimization setting discussed in [10]. For this case, $P^*\_t = P^*$ is fixed for $t \in [T]$, $P\_1$ is the uniform distribution over the contexts, and $P\_t = \frac{1}{t-1}\sum\_{s=1}^{t-1} \delta\_{c\_s}$, $t>1$, is the empirical distribution of the observed contexts, where, for $c \in {\cal C}$, $\delta\_c$ is the Dirac measure such that $\delta\_c(A) = 1$ if $c \in A$ and $0$ otherwise. Under this setting, one can put a probabilistic bound on the amount of distributional shift $\epsilon\_t$, which can be used to bound the regret similarly to the proof of [10, Corollary 4]. We are able to show that the lenient regret of RoBOS when run with $\beta (\delta/3)$ is bounded with probability at least $1-\delta$ by
\begin{equation*}
R^{\textit{l}}\_T \leq 4 \beta\_T \sqrt{T\left( 2 \gamma\_T + 2\log \left(\frac{12}{\delta}\right)\right)} + B' \epsilon\_1 + B' 2 \sqrt{T} \left(2 + \sqrt{2 \log \left( \frac{\pi^2 T^2}{2\delta} \right)} \right) ~.
\end{equation*}
- **(The experiments are limited...)** We note that we included a real-world experiment in our supplemental document, now moved to the main paper, concerning insulin dosage allocation for Type 1 diabetes patients. For more detail about the experiment, we kindly refer the reviewer to our response to reviewer KF6g's first question.
- **(...discuss the effects of continuous context...)**
In the original paper, we assume that at each round $t\in[T]$, contexts are drawn stochastically from the true distribution $P^*\_t$ over a finite context set. We studied the continuous context case in the response phase and observed that our regret bounds hold with only minor differences in the regret analysis. The proof for the continuous case closely follows the discrete case; however, instead of expressing the expectations as an inner product (e.g., $\langle w\_t, f\_x \rangle$), we work with the general representation (e.g., $\mathbb{E}\_{c\sim P\_t}[f\_x]$). By the linearity of expectation, our derivations continue to hold. Also, in the analysis of the lenient regret, in equation (37), instead of using the Cauchy-Schwarz inequality, we use the definition of the MMD distance to bound the term $\mathbb{E}\_{c \sim P\_t}[f(\hat{x}\_t, c)] - \mathbb{E}\_{c \sim \bar{P}^t\_{\hat{x}\_t}} [f(\hat{x}\_t,c)] \leq B \Delta(P\_t, \bar{P}^t\_{\hat{x}\_t})$, where $B$ is an upper bound on the RKHS norm of $f$ and $\Delta(\cdot,\cdot)$ is the MMD distance.
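For intuition, here is a minimal empirical estimate of an MMD distance of the kind used above (a biased V-statistic estimate with an RBF kernel; the kernel choice and `gamma` are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed for all pairs
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd(X, Y, gamma=1.0):
    # biased (V-statistic) estimate: sqrt(E k(x,x') + E k(y,y') - 2 E k(x,y))
    val = (rbf_kernel(X, X, gamma).mean()
           + rbf_kernel(Y, Y, gamma).mean()
           - 2.0 * rbf_kernel(X, Y, gamma).mean())
    return float(np.sqrt(max(val, 0.0)))

rng = np.random.default_rng(0)
P = rng.normal(0.0, 1.0, (200, 1))   # samples from the reference distribution
small_shift = mmd(P, P + 0.1)
large_shift = mmd(P, P + 3.0)
assert small_shift < large_shift     # larger shift => larger distance
```

The bound above then follows because a function whose RKHS norm is at most $B$ can change in expectation by at most $B$ times the MMD between the two distributions.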
### Questions
**(...test-time distribution shifts...)**
Indeed, a promising area of application is test-time distributional shift. RoBOS presents itself as a possible solution to this problem, especially when the test-time distributional shift is difficult to foresee. Many engineering products are designed to be deployed in unpredictable environments with multiple contingent factors. The safety guarantees of RoBOS with respect to all possible distributional shifts may give it an edge in such environments. For instance, when $P\_t$ is the training distribution, one can train a model to achieve the maximum accuracy, i.e., $\hat{x}\_t = \arg\max\_{x \in {\cal X}} \mathbb{E}\_{c \sim P\_t}[f(x,c)]$. However, this will be of no use when the test distribution $P^*\_t$ is significantly different from $P\_t$. Instead, with robust satisficing, one seeks to achieve an accuracy that we are content with, e.g., $\tau=0.9$ (representing $90\\%$ accuracy). Then, the robust satisficing solution guarantees, at test time, an accuracy of at least $\tau - \kappa\_{\tau,t} \Delta(P^*\_t, P\_t)$.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing a detailed response. I have read the responses and other reviews, and still keep my score. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their careful reading of our paper and constructive comments. Here we address the common questions raised by the reviewers.
- **(Real-world experiments and how to select $\tau$)** One real-world experiment concerning safe dose allocation for diabetes patients is presented in the supplementary document. Type 1 Diabetes Mellitus (T1DM) patients require bolus insulin doses (id) after meals for postprandial blood glucose (pbg) regulation. One of the most important factors that affect pbg is meal carbohydrate (cho) intake [C]. Let ${\cal X}$ and ${\cal C}$ represent admissible id and cho values. For $x \in {\cal X}$, $c \in {\cal C}$, let $g(x,c)$ represent the corresponding (expected) pbg value. Function $g$ depends on the patient's characteristics and can be regarded as unknown. The main goal of pbg regulation is to keep pbg close to a target level $K$ in order to prevent two potentially life-threatening events called hypoglycemia (e.g., pbg $<$ 70 mg/dl) and hyperglycemia (e.g., pbg $>$ 180 mg/dl). This requires $x\_t$ to be chosen judiciously based on the current $c\_t$. Patients rely on a method called cho counting to calculate $c\_t$. Oftentimes, this method is prone to errors [D]. The reported cho intake $\zeta\_t$ can differ significantly from $c\_t$. In order to use DRO, one needs to identify a range of plausible distributions for cho calculation errors, which is hard to calculate and interpret. On the other hand, specifying $\tau$ corresponds to defining an interval of safe pbg values around $K$ (e.g., pbg $=$ 125 mg/dl) that one is content with, which is in line with standard clinical practice [E]. We will move this experiment from the supplemental document to the main paper. In addition, in response to the comments of reviewers Gzxd and KF6g, we performed new experiments that compare the accumulated rewards of RoBOS and other competing benchmarks. Moreover, we also carried out simulations that show the sensitivity of the results to the aspiration level $\tau$ set by the learner. These results can be found in the response pdf.
- **(Advantages of RS over DRO)** RS has multiple advantages over DRO depending on the problem setup. The main difference is that it considers the goal of satisficing rather than optimizing. Satisficing is argued to be normatively better than optimizing by many decision theorists and philosophers [16, 17, 18], especially under settings of deep uncertainty. Once the normative shift is made from optimizing to satisficing, a reasonable approach for decision-making is to maximize the robustness of the action to the uncertainties of the environment. RS achieves this by selecting the action that achieves the desired threshold under all possible context distributions the environment can produce. When no such action is present, RS picks the action that achieves the threshold under the biggest set of context distributions and minimizes the margin of error with respect to the threshold. Consider the case where $\tau = Z\_r := \max\_{x \in \mathcal{X}}\min\_{P\in\mathcal{U}(r)} \mathbb{E}\_{c\sim P} [f(x,c)] $, where $\mathcal{U}(r)$ is the ambiguity ball of radius $r$ centered at $P\_t$. For this setting, the DRO solution implies
\begin{align*}
&\tau - \mathbb{E}\_{c\sim P} [f(x\_{\text{DRO}},c)] \leq 0 \qquad \forall P \in \mathcal{U}(r) \\\\
&\tau - \mathbb{E}\_{c\sim P} [f(x\_{\text{DRO}},c)] \leq \infty \qquad \forall P \in {\cal P}\_0 \setminus \mathcal{U}(r)
\end{align*}
whereas the RS solution implies
\begin{align*}
&\tau - \mathbb{E}\_{c\sim P} [f(x\_{\text{RS}},c)] \leq \kappa\_\tau \Delta(P,P\_t) \qquad \forall P \in \mathcal{U}(r) \\\\
&\tau - \mathbb{E}\_{c\sim P} [f(x\_{\text{RS}},c)] \leq \kappa\_\tau \Delta(P,P\_t) \qquad \forall P \in {\cal P}\_0 \setminus \mathcal{U}(r)
\end{align*}
Algorithmically, RoBOS needs less information than DRBO since, unlike DRBO, RoBOS does not utilize an ambiguity set in which the true distribution is assumed to lie. It is shown in the experiments that when the ambiguity ball is not selected correctly, DRBO solutions can be suboptimal or even disastrous w.r.t. the satisficing goals. In contrast, it would be better to use DRO if we have a tight and trustworthy ambiguity set and the goal is optimizing rather than satisficing. In addition, in some applications such as blood glucose regulation, RS is more interpretable than DRO (see our previous response).
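To make the contrast concrete, here is a small hypothetical example (all payoffs, distributions, and distances are invented for illustration): with a finite context set and a handful of candidate distributions, DRO maximizes the worst case inside an ambiguity ball, while RS minimizes the fragility $\kappa$ over all candidate distributions.

```python
import numpy as np

f = np.array([[0.9, 0.4],    # f[x, c]: action 0 excels under context 0, degrades under 1
              [0.6, 0.6]])   # action 1 is mediocre but insensitive to the context
cands = [np.array([0.9, 0.1]),   # reference distribution P0 itself
         np.array([0.5, 0.5]),
         np.array([0.1, 0.9])]
dists = [0.0, 0.4, 0.8]      # assumed Delta(P, P0) for each candidate
tau = 0.55                   # aspiration level

def fragility(x):
    # smallest kappa >= 0 with tau - E_P[f(x, .)] <= kappa * Delta(P, P0) for all P
    return max(0.0, max((tau - p @ f[x]) / d for p, d in zip(cands, dists) if d > 0))

def dro_value(x, r):
    # worst-case expected reward over the ambiguity ball of radius r
    return min(p @ f[x] for p, d in zip(cands, dists) if d <= r)

x_rs = min(range(2), key=fragility)                       # robust satisficing choice
x_dro = max(range(2), key=lambda x: dro_value(x, r=0.4))  # DRO with a small ball
print(x_dro, x_rs)  # 0 1 -- DRO trusts the ball; RS hedges against larger shifts
```

Here DRO picks the fragile action because the big shift lies outside its ball, whereas RS picks the stable action whose shortfall degrades gracefully with any amount of shift.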
### References
- [A] Gretton et al., A kernel two-sample test. 2012.
- [B] Sejdinovic et al., Equivalence of distance-based and rkhs-based statistics in hypothesis testing. 2013.
- [C] Walsh et al., Guidelines for optimal bolus calculator settings in adults. 2011
- [D] Kawamura et al., The factors affecting on estimation of carbohydrate content of meals in carbohydrate counting. 2015.
- [E] Kahanowitz et al., Type 1 diabetes–a clinical perspective. 2017.
Pdf: /pdf/7554e65ca7ce54d7ba7b399bf6a07979ab92fc90.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Towards Better Dynamic Graph Learning: New Architecture and Unified Library | Accept (poster) | Summary: This paper studies the problem of continuous-time dynamic graph learning. The authors proposed a Transformer-based architecture DyGFormer to learn the dynamic edge representation, which mainly consists of a neighbor co-occurrence encoding scheme to count the co-occurrence of nodes, a patching technique to split the node’s sequence into multiple patches, and a Transformer as the backbone. The authors also propose a dynamic graph learning library DyGLib, and re-report the important baselines’ performance based on DyGLib. Experimental results conducted on 13 benchmark datasets verify the effectiveness of the proposed method.
Strengths: 1. The authors propose a co-occurrence encoding scheme and a patching technique to capture the correlation between sequences and the long histories of sequences, respectively. As far as I know, learning dynamic link representations directly from the sequences of nodes’ first-hop interactions is less studied in dynamic graph learning, especially how to model the long histories of the interaction sequences.
2. The authors conduct extensive experiments on 13 benchmark datasets to verify the effectiveness of the proposed two-component. Besides the proposed method, the authors also re-implemented the important baselines in the area, and point out that some findings of baselines are not in line with previous reports because of their varied pipelines and problematic implementations, which may benefit future research in the continuous-time dynamic graph learning community.
3. The authors propose a unified library for the continuous-time dynamic graph learning problem. This library provides a unified pipeline to train and evaluate different baselines. I think this library can provide a tool for researchers in the community to conveniently evaluate different methods in a unified setting.
Weaknesses: 1. In the problem formalization of section 3, the authors define the target of the dynamic graph learning problem as learning the time-aware representations for each node. However, DyGFormer actually learns dynamic link representations rather than node representations. Or, more accurately, DyGFormer learns the contextual node representation $h^t_{u|(u,v)}$ but not $h^t_u$, where the representation $h^t_{u|(u,v)}$ relies on the other node $v$. It would be better to add some discussion of this in the paper.
2. In section 5.2, the authors say that “some of our findings of baselines differ from previous reports”, but the authors do not point out which baselines’ performance improves. I suggest the authors explicitly list which baselines’ performance improves.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to the weaknesses part.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive reviews. In response to your comments, we have clarified the relationships between the learned representations of links and nodes. We have also specified the baselines whose observations are different from previous reports. We would be pleased to explain more if further discussions are required.
**W1: It will be better to add some discussions for the learning of dynamic link representations but not node representations.**
For dynamic graph learning, the status of each node is usually affected by its interactions with other nodes. Therefore, it is reasonable to learn the dynamic node representations based on their links. More concretely, for the dynamic link prediction task, future links of nodes are naturally influenced by their historical interactions. This allows us to learn from links and derive time-aware representations of source and destination nodes based on the link. For the node classification task, interactions may lead to the state change of nodes (e.g., on Wikipedia and Reddit datasets, the state of a user may change from not banned to banned because of an interaction [1]). Such a phenomenon also indicates that the dynamics of nodes can be well captured when learning from links. It is also worth noticing that many previous works (e.g., JODIE, DyRep, TGAT, TGN, …) learn in the same way as well.
**W2: In section 5.2, the authors say that “some of our findings of baselines differ from previous reports”, but the authors do not point out which baselines’ performance improves. I suggest the authors explicitly list which baselines’ performance improves.**
The differences between our observations and previous reports on baselines are in two main aspects.
- **The performance of some baselines can be significantly improved by properly setting certain hyperparameters.** Let’s take TGAT for transductive dynamic link prediction with the random negative sampling strategy as an example (see Table 1 in our paper). Compared with the performance in [2], the results of AP are significantly improved on datasets like MOOC (from 0.61 to 0.86), LastFM (from 0.50 to 0.73), Enron (from 0.59 to 0.71), Social Evo. (from 0.76 to 0.93), Flights (from 0.89 to 0.94), and Contact (from 0.58 to 0.96). Similar improvements can also be found on JODIE, DyRep, and TGN on several datasets. This is because we perform the grid search to find the best settings of hyperparameters, which effectively improves the performance of baselines;
- **Some methods would obtain worse results after we fix the problems or make adaptions in their implementations.** Let’s take CAWN for transductive dynamic link prediction with the random negative sampling strategy as an example (see Table 1 in our paper). Compared with the results in [2], the performance on the AP metric of CAWN drops sharply on datasets like LastFM (from 0.98 to 0.87), Can. Parl. (from 0.94 to 0.70), US Legis. (from 0.97 to 0.71), UN Trade (from 0.97 to 0.65), and UN Vote (from 0.82 to 0.53). This is because [2] uses the problematic implementations of CAWN to conduct experiments and the results will sometimes become worse after fixing the issues. You can also see Sections B.3 and B.4 in the Appendix for more detailed explanations.
[1] Predicting dynamic embedding trajectory in temporal interaction networks, KDD, 2019.
[2] Towards better evaluation for dynamic link prediction, NeurIPS, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for rebuttal, which addressed my concerns. I believe this work will advance the important dynamic graph learning field, and I am happy to increase my rating.
---
Reply to Comment 1.1.1:
Title: RE: Official Comment by Reviewer zX7A
Comment: Thank you! We appreciate your timely reply and support for our work. | Summary: In this paper, the authors considered the dynamic graph representation learning (a.k.a. dynamic network embedding) problem and proposed a novel transformer-based architecture - DyGFormer, with several original designs (e.g., a neighbor co-occurrence encoding scheme, a patching technique, etc.) Moreover, the authors also implemented a unified continuous-time dynamic graph learning library, which include several baselines, widely-used datasets, and a common evaluation pipeline. Exhaustive experiments with various settings of dynamic link prediction and dynamic node classification were also conducted to validate the effectiveness of the proposed method.
Strengths: (S1) The overall presentation of this paper is clear, which makes it easy to grasp the key ideas.
(S2) There are some original designs (e.g., neighbor co-occurrence encoding scheme and patching of historical neighbor sequence) in the proposed method.
(S3) The authors conducted many experiments on 13 datasets for both dynamic link prediction and dynamic node classification with various settings.
(S4) The authors also implemented a unified library including some typical baselines and a common evaluation pipeline for various task settings (e.g., transductive and inductive dynamic link prediction and node classification), with the code provided for review.
Weaknesses: (W1) Some statements come with too many citations (e.g., 'continuous-time methods [27, 53, 62, 44, 35, 9, 55, 58, 57, 24, 34, 12]', 'graph convolutions [53, 62, 44, 35, 9, 57]', etc.), which are hard to check against their sources. Some references are also repeated in successive sentences (e.g., 'only a few libraries are specifically designed for dynamic graphs [18, 45, 71]. DynamicGEM [18] is a library for dynamic graph embedding methods'). It is better to ensure that there are at most 5 references for each statement. Some of the references can also be replaced with concrete examples or methods introduced in this paper. Furthermore, references for some important statements are also missing, e.g., 'unlike most previous methods that need to retrieve nodes' historical interactions from multiple hops', but what do 'previous methods' refer to?
(W2) Some problem statements regarding model optimization are unclear. Concretely, the formal definition of the training loss is not given. In general, we can divide existing dynamic graph representation techniques into task-dependent and task-independent methods, which are respectively trained via (i) supervised losses related to the downstream task and (ii) unsupervised losses independent of the downstream task. It is unclear whether the proposed method is task-dependent or task-independent. Among the baselines, DyRep and TGAT are task-independent while CAW is task-dependent (with a loss designed for dynamic link prediction), as far as I can check. From my perspective, it is unfair to compare both types of methods in a common experiment setting, where task-dependent methods are expected to have better performance than task-independent approaches, due to the incorporation of supervised information related to the downstream task. For task-independent methods, the authors should also clarify the downstream module (e.g., logistic regression, SVM, MLP, etc.) used to support a concrete task. More importantly, the evaluation pipeline of a unified library should also cover both settings.
(W3) Although the authors gave toy examples for some procedures of the proposed method (e.g., the computation of the neighbor co-occurrence encoding), several details still need further clarification (e.g., how to pad a derived patch, etc.). It is recommended to summarize the overall procedure for deriving all the encodings and patches as pseudo-code (even in the supplementary material).
(W4) In related research, inductive dynamic link prediction includes the prediction of links between (i) one old (i.e., previously observed) node and one new (i.e., previously unobserved) node as well as between (ii) two new nodes. It is unclear that the inductive setting in this paper refers to both cases or just the latter case.
(W5) Although the authors included 7 baselines in the experiments, most of them are based on time-encoded deep sequential models (e.g., with RNNs and attention). In addition, there are some other related approaches based on temporal point processes (e.g., HTNE [1] and TREND [2]) and neural ordinary differential equations (e.g., GSNOP [3]) that are not considered in the experiments. A unified library should cover various types of methods.
- [1] Zuo, Yuan, et al. Embedding temporal network via neighborhood formation. KDD 2018.
- [2] Wen, Zhihao, and Yuan Fang. Trend: Temporal event and node dynamics for graph representation learning. Web Conference 2022.
- [3] Luo, Linhao, Gholamreza Haffari, and Shirui Pan. Graph sequential neural ode process for link prediction on dynamic and sparse graphs. WSDM 2023.
(W6) In the supplementary material, although the authors gave some details regarding the datasets, the source information (i.e., where we can download these datasets) is missing. In Table 4 of the supplementary material, what does 'duration' mean? Does it mean the time granularity? Consistent with the statistics of 'duration', what is the total number of time steps for each dataset? From my perspective, the scale of a dynamic graph is related to the (i) number of nodes $N$ and (ii) number of time steps $T$. As far as I can check, all the datasets are with $N<20,000$, which cannot be considered as large datasets. Can the proposed method be scaled up to larger datasets? Or are there any possible solutions to addressing the scalability issue (perhaps as one future research direction)?
(W7) It seems that the authors uploaded the paper to arXiv before the formal submission to NeurIPS. Several months ago, Google Scholar and GitHub recommended the arXiv version of this paper to me. As a result, I have already known the names and institutions of the anonymous authors, which breaks the double-blind policy.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See W1-W7.
As claimed before Eq. (3)-(7), the row dimensionality of ${\bf{Z}}^t$ (i.e., the input of the transformer encoder) is $(l_u^t + l_v^t)$, which is related to the lengths of $S_u^t$ and $S_v^t$. Since different nodes $u$ may have $S_u^t$ with different lengths, it seems that one can only feed the input ${\bf{Z}}^t$ w.r.t. one link $(u, v, t)$ to the transformer encoder at a time due to the non-fixed dimensionality of the input. Can we simultaneously feed ${\bf{Z}}^t$ w.r.t. multiple (i.e., more than one) links $(u, v, t)$ into the transformer encoder and derive multiple embedding pairs $({\bf{h}}_u^t, {\bf{h}}_v^t)$? If yes, how to derive these embedding pairs $({\bf{h}}_u^t, {\bf{h}}_v^t)$? If not, there seems to be an efficiency issue for the proposed method.
Consider an extreme case. For some new (i.e., previously unobserved) nodes without any available attributes and historical neighbors, can the proposed method still be able to support the inductive inference (e.g., dynamic link prediction and node classification) w.r.t. these nodes?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See W1-W7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed comments. We have clarified some presentations and provided detailed settings. We have also discussed the mentioned references and tested methods on larger datasets. We hope our answers can sufficiently address your concerns, and if that is the case, we kindly ask you to consider increasing the score.
**W1: Too many or repeated citations. References of some important statements are missing.**
1) We have listed some typical methods and changed the sentences to “continuous-time methods like JODIE[27], DyRep[53], TGAT[62], TGN[44], CAWN[58], TCL[55], GraphMixer[12] and others[35, 9, 57, 24, 34]” and “graph convolutions like DyRep[53], TGAT[62], TGN[44] and others[35, 9, 57]”.
2) We clarify the former citations “…for dynamic graphs[18,45,71]” are used to list some libraries, while the latter “DynamicGEM [18]” is cited to discuss a specific one. So they don't seem to repeat.
3) We have changed the presentation to “Unlike most previous methods … hops (e.g., DyRep[53], TGAT[62], TGN[44], CAWN[58])”.
**W2: Unclear training loss and downstream modules.**
For fair comparisons, all models are optimized by the supervised binary cross-entropy loss for the dynamic link prediction and node classification tasks. In our initial attempt, we also tried to train some baselines with their own unsupervised loss (e.g., the negative log-likelihood loss for the temporal point process in DyRep), but they didn’t work well. For both tasks, we use MLPs as downstream modules (Lines 254-255, 261-262).
**W3: Some details of the overall procedure need clarification.**
Due to the space limit, we elaborate on how to perform padding in a batch, and we'll add an algorithm to describe the overall process in the revised paper. For a batch of multiple links, we first obtain the sequences of source and destination nodes in the batch (denoted by $S_U$ and $S_V$). Then, we get the maximal sequence lengths in $S_U$ and $S_V$ (denoted by $l_U^{max}$ and $l_V^{max}$). If $l_U^{max}$ or $l_V^{max}$ cannot be divided by the patch size $P$, we’ll increase them until they are divisible. Next, we apply zero-padding (as we use the node with index 0 as the padding marker) to sequences shorter than $l_U^{max}$ and $l_V^{max}$ in $S_U$ and $S_V$. Finally, we extract encodings for the padded sequences in $S_U$ and $S_V$, divide them into $l_U^{max}/P$ and $l_V^{max}/P$ patches, and combine them as the inputs of the Transformer. Note that we'll store the original lengths of the source and destination nodes' sequences and use them to derive their final embeddings; see Equation 8.
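The padding-and-patching steps described above can be sketched as follows. This is a simplified NumPy stand-in, not DyGLib's actual implementation; the helper `pad_and_patch` and its shapes are assumptions for illustration:

```python
import numpy as np

def pad_and_patch(seqs, patch_size):
    # seqs: list of [length_i, dim] arrays of per-neighbor encodings
    dim = seqs[0].shape[1]
    max_len = max(len(s) for s in seqs)
    if max_len % patch_size != 0:                 # round up so P divides the length
        max_len += patch_size - max_len % patch_size
    padded = np.zeros((len(seqs), max_len, dim))  # zero-padding for short sequences
    for i, s in enumerate(seqs):
        padded[i, :len(s)] = s
    # split each padded sequence into max_len / P patches of size P * dim
    return padded.reshape(len(seqs), max_len // patch_size, patch_size * dim)

# two nodes in a batch with sequence lengths 5 and 2, encoding dim 4
seqs = [np.ones((5, 4)), np.ones((2, 4))]
patches = pad_and_patch(seqs, patch_size=3)
print(patches.shape)  # (2, 2, 12): lengths padded from 5 up to 6, then 2 patches each
```

The original sequence lengths would be kept alongside `patches` so the padded positions can be ignored when deriving the final node embeddings.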
**W4: Unclear inductive setting.**
A link is treated as inductive if it contains at least one new node. So the inductive setting covers both cases.
**W5: Some related methods are not considered.**
Thanks for providing these works! HTNE and TREND are built on Temporal Point Processes (TPPs), which can solve the same tasks as in our work. But currently, we cannot integrate them into DyGLib since their codes are unavailable. Similar to HTNE and TREND, DyRep is also a TPP-based method that uses deep neural networks to parameterize the intensity function in the TPP. Our DyGFormer outperforms DyRep in the experiments, which verifies its superiority. We focus on general dynamic graphs while GSNOP is designed for sparse dynamic graphs, which can be viewed as another line of work parallel to ours. These contents will be added to the revised paper.
**W6(Part 1/2): Some details of datasets are missing.**
We gave the URL to download all datasets in the README.md file in our codes. But we cannot show it here as NeurIPS 2023 forbids external links in the rebuttal. “duration” is the time span of the dataset. For Wikipedia, Reddit, MOOC, LastFM, Enron, Social Evo. and UCI, the numbers of time steps are 152,757, 669,065, 345,600, 1,283,614, 22,632, 565,932 and 58,911. Their time granularities are Unix timestamps. For Flights, Can. Parl., US Legis., UN Trade, UN Vote and Contact, the time steps/time granularities are 122/days, 14/years, 12/congresses, 32/years, 72/years and 8,064/5 minutes.
**W6(Part 2/2): Can the method be scaled up to larger datasets?**
Yes. We have tested DyGFormer on two larger datasets (tgbl-review with 352,637 nodes and 4,873,540 links, tgbl-coin with 638,486 nodes and 22,809,486 links) in [1]. Due to limited time, we use the hyperparameters found on Wikipedia, and further hyperparameter searches are left for future work. The datasets use Mean Reciprocal Rank (MRR) for dynamic link prediction. We show results on the test set, and blank spaces mean the methods were too slow or ran out of GPU memory. We find DyGFormer scales well to larger datasets, while some baselines (JODIE, CAWN) fail. It performs moderately on tgbl-review and best on tgbl-coin, which shows its effectiveness.
||tgbl-review|tgbl-coin|
|---|---|---|
|JODIE|**54.97±0.91**||
|DyRep|36.70±1.30|43.40±3.80|
|TGAT|45.50±2.57|60.88±1.25|
|TGN|53.20±2.00|58.30±5.00|
|CAWN|34.43±1.39||
|EdgeBank|9.99±0.00|58.00±0.00|
|TCL|36.13±0.54|63.95±0.62|
|GraphMixer|53.12±0.22|73.25±1.75|
|DyGFormer|39.10±2.24|**74.88±0.04**|
**W7: Break of double-blind policy.**
Our submitted paper, appendix and codes are anonymized without identifying information. Also, NeurIPS 2023 allows authors to submit anonymized work of preprints. So we obey the double-blind policy.
**Q1: Can we simultaneously feed multiple links into transformer encoder and derive multiple embedding pairs?**
Yes. We use mini-batch training for DyGFormer by applying zero padding on multiple links in a batch. See **W3** for more details.
**Q2: Can the method support inductive inference for nodes without attributes and historical neighbors?**
DyGFormer can handle the latter case (nodes without historical neighbors) and the inductive setting already covers it (when DyGFormer predicts a new node in evaluation data). But if nodes have no attributes either, the task is hard for most existing models, which is an interesting future direction.
[1] Temporal Graph Benchmark for Machine Learning on Temporal Graphs, arXiv, 2023. | Summary: In this paper, the authors considered the dynamic graph representation learning (a.k.a. dynamic network embedding) problem and proposed a novel transformer-based architecture - DyGFormer, with several original designs (e.g., a neighbor co-occurrence encoding scheme, a patching technique, etc.) Moreover, the authors also implemented a unified continuous-time dynamic graph learning library, which include several baselines, widely-used datasets, and a common evaluation pipeline. Exhaustive experiments with various settings of dynamic link prediction and dynamic node classification were also conducted to validate the effectiveness of the proposed method.
The authors propose a new Transformer-based architecture DyGFormer for dynamic graph learning. The architecture is designed to overcome some of the limitations of previous methods by:
- Leveraging a neighbor co-occurrence encoding scheme, which captures the frequency of appearances of each neighbor in the sequences of source and destination nodes. This explicitly explores the correlations between nodes.
- Utilizing a patching technique, which allows splitting each source/destination node’s sequence into multiple patches, enabling the capture of long-term temporal dependencies and reducing the computational complexity to a level that doesn't depend on the input sequence length.
The author also developed a library called DyGLib as a unified library specifically tailored to continuous-time dynamic graph learning. Its key features include:
- Standardized training pipelines to facilitate reproducibility and consistency across different methods.
- Extensible coding interfaces for better adaptability and scalability, with integration of 13 graph datasets and 9 graph algorithms
Strengths: The paper presents an extensive and technically rigorous evaluation. The authors benchmark their model on 13 dynamic graph datasets against nine state-of-the-art dynamic graph learning models, providing thorough evidence for their conclusions. The comprehensive ablation study further supports the effectiveness of the proposed components.
The proposed DyGFormer architecture performs favorably in comparison to other dynamic graph learning models, with only a few exceptions.
DyGFormer handles long-term node interactions efficiently, thanks to the novel integration of neighborhood co-occurrence encoding and the patching technique.
Weaknesses: The paper lacks a comprehensive technical discussion motivating the design of the proposed model. The co-occurrence matrix is said to model node correlations better than other models, but it's not entirely clear why explicit modeling of correlations leads to superior performance. Node proximity and similar neighbors are features that even basic models like the Graph Convolutional Network (GCN) can capture without explicit modeling.
The paper's presentation could be improved. The title, for instance, may overpromise what the paper actually delivers - I was expecting a unified learning algorithm along with a software tool. However, on closer reading, it appears that the main contribution is from DyGFormer, while DyGLib, though useful, doesn't seem to bring much novelty or be absolutely essential to the paper's contributions. The preliminaries section seems unfinished, and the section on DyGLib includes several unsupported claims and lacks novelty.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1) Why does explicitly modeling the neighbor co-occurrence encoding help? What does the model eventually learn in the encoding module? What is the motivation of this function design which computes f(count_in_Su) + f(count_in_Sv) for each neighbor node?
2) Dynamic graph models typically consider the depth of graph search, but the proposed method only considers first-layer interactions. Does this imply that high-level interaction modeling is not needed for dynamic graphs?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the helpful feedback. We have explained the motivation for our neighbor co-occurrence encoding. We have also clarified some presentations and our opinions on high-order interactions for dynamic graph learning. We are glad to answer more if there are any further issues.
**W1: The paper lacks a comprehensive technical discussion …(GCN) can capture without explicit modeling.**
Our neighbor co-occurrence encoding can exploit the correlations between source node $u$ and destination node $v$ by jointly learning from their sequences. However, the GCNs (involved in DyRep, TGAT and TGN) only capture relationships inside $u$/$v$’s own interactions as they compute for $u$/$v$ separately, and fail to explore the correlations between $u$ and $v$. Results of the ablation study (orange bar) in Figure 4 show the need for learning correlations between $u$ and $v$. See **Q1** for an analysis of the neighbor co-occurrence encoding’s motivation.
**W2: The paper's presentation could be improved …unsupported claims and lacks novelty.**
- **Title**. “New Architecture” means DyGFormer is a new dynamic graph learning architecture with original designs (neighbor co-occurrence encoding and patching technique), which is different from current methods. “Unified Library” means DyGLib is a unified library with standard training pipelines, extensible coding interfaces and comprehensive evaluating protocols, which is needed for dynamic graph learning.
- **Claims of DyGLib**. DyGLib aims to solve different training pipelines (Lines 218-222), various and problematic implementations (Lines 223-229, Section B.3), and insufficient evaluation protocols (Lines 230-237) in previous studies. Empirical findings (Lines 295-300, Section B.4) well support DyGLib’s claims. Thus, DyGLib contributes to reproducible, scalable and credible dynamic graph learning research.
**Q1(Part 1/2): Why does explicitly modeling neighbor co-occurrence encoding help? What does the model learn?**
Our neighbor co-occurrence encoding is motivated by the observation that more common historical neighbors imply more future interactions (Lines 170-171). To analyze its effect, we compare DyGFormer and DyGFormer without Neighbor Co-occurrence Encoding (i.e., DyGFormer w/o NCoE). We first define some notation. $TP$, $TN$, $FN$ and $FP$ are True/False Positives/Negatives. We focus on links whose predictions by DyGFormer w/o NCoE are changed by DyGFormer ($FN$→$TP$, $FP$→$TN$, $TP$→$FN$ and $TN$→$FP$) and define the Changed Link Ratio ($CLR$) as the ratio of changed links to their original set, computed as $|FN \rightarrow TP|/|FN|$, $|FP \rightarrow TN|/|FP|$, $|TP \rightarrow FN|/|TP|$ and $|TN \rightarrow FP|/|TN|$. The Common Neighbor Ratio ($CNR$) is the ratio of common neighbors in source node $u$’s sequence $S_u$ and destination node $v$’s sequence $S_v$, i.e., $|S_u \cap S_v|/|S_u \cup S_v|$. If our encoding is helpful, DyGFormer will fix wrong predictions of DyGFormer w/o NCoE (more $FN$→$TP$, $FP$→$TN$) and make fewer incorrect changes (fewer $TP$→$FN$, $TN$→$FP$). We compute $CLR$ and averaged $CNR$ of links in the above sets, and show results in $CLR$($CNR$) format on five typical datasets due to the space limit.
||$FN$→$TP$|$FP$→$TN$|$TP$→$FN$|$TN$→$FP$|
|---|---|---|---|---|
|Wikipedia|68.36(18.16)|72.73(0.01)|1.68(0.10)|1.69(2.49)|
|Flights|83.66(37.09)|83.83(2.28)|1.73(7.06)|2.11(20.28)|
|UCI|71.45(19.08)|94.11(2.49)|7.29(3.35)|1.82(13.02)|
|US Legis.|31.63(69.92)|23.67(62.13)|6.63(61.14)|1.59(63.80)|
|UN Vote|44.02(78.57)|36.46(81.39)|28.95(80.86)|30.53(77.02)|
We find **our neighbor co-occurrence encoding helps DyGFormer correct wrong predictions of DyGFormer w/o NCoE on datasets with several times higher $CNR$ of positive links than negative ones (this occurs on most datasets).** For Wikipedia, Flights and UCI, their $CNR$s of $FN$ are much higher than those of $FP$ (37.09% vs. 2.28% on Flights) and DyGFormer corrects most wrong predictions of DyGFormer w/o NCoE (83.66% for positive links $FN$→$TP$, 83.83% for negative links $FP$→$TN$ on Flights). **Corrections made by our encoding are less obvious when $CNR$s of positive and negative links are similar, which occurs in only 2 out of 13 datasets.** For US Legis. and UN Vote, their $CNR$s of $FN$ and $FP$ are similar (69.92% vs. 62.13% on US Legis.), weakening our encoding’s advantage (only 31.63%/23.67% of positive/negative links are corrected in $FN$→$TP$/$FP$→$TN$ on US Legis.).
Thus, we conclude that the neighbor co-occurrence encoding helps DyGFormer capture common historical neighbors in $S_u$ and $S_v$, and brings better results in most cases. We further compare DyGFormer with the DyGFormer w/ SepNO variant that learns from $S_u$ and $S_v$ separately (orange bar in Figure 4) and get similar observations, which supports our motivation again.
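The $CNR$ and $CLR$ quantities used above can be sketched as follows (hypothetical helper names; the actual evaluation code may differ):

```python
def common_neighbor_ratio(seq_u, seq_v):
    """CNR = |S_u ∩ S_v| / |S_u ∪ S_v| over the nodes appearing in the two
    historical interaction sequences (i.e., Jaccard similarity)."""
    s_u, s_v = set(seq_u), set(seq_v)
    union = s_u | s_v
    return len(s_u & s_v) / len(union) if union else 0.0

def changed_link_ratio(original_set, changed_set):
    """CLR = fraction of links in an original outcome set (e.g. FN of the
    ablated model) whose prediction the full model changed (e.g. FN -> TP)."""
    return len(changed_set) / len(original_set) if original_set else 0.0

print(common_neighbor_ratio([1, 2, 3], [2, 3, 4]))  # 0.5
```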
**Q1(Part 2/2): What’s the motivation of function f(count_in_Su) + f(count_in_Sv) for each neighbor node?**
This function is designed to encode the co-occurrence features of each neighbor by combining the appearing frequencies in $S_u$ and $S_v$. Take source node $u$ as an example. Its neighbor co-occurrence features is $C_u^t\in\mathbb{R}^{|S_u^t|\times 2}$, where $C_u^t[:,0]$ is the appearing frequency of nodes $u^\prime\in S_u$ in $S_u$ and $C_u^t[:,1]$ is the frequency of nodes $u^\prime\in S_u$ in $S_v$. We first apply $f()$ on $C_*^t[:,0]$ and $C_*^t[:,1]$, and then add them to get the encoded co-occurrence features.
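A minimal sketch of this encoding (names are illustrative, and the identity function stands in for $f$, which the paper implements as a learnable transformation):

```python
from collections import Counter

def cooccurrence_encoding(seq_u, seq_v, f=float):
    """For each neighbor u' in seq_u, return
    f(count of u' in seq_u) + f(count of u' in seq_v).
    Here f is an identity placeholder; in the paper it is learnable."""
    count_u, count_v = Counter(seq_u), Counter(seq_v)
    return [f(count_u[n]) + f(count_v[n]) for n in seq_u]

# Node 1 appears twice in u's sequence and once in v's: 2 + 1 = 3.0
print(cooccurrence_encoding([1, 1, 2], [1, 3]))  # [3.0, 3.0, 1.0]
```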
**Q2: The dynamic graph models … Does it imply high-level interaction modeling is not needed for dynamic graphs?**
Our experiments show that though DyGFormer only learns from first-hop interactions, it usually beats baselines that use high-order interactions (DyRep, TGAT, TGN, …). This indicates dynamic graph learning can be done well with just first-hop interactions when we use effective designs. But we cannot say high-order interactions are not needed. Instead, it is better to infer that the benefits of using high-order information in the baselines are not obvious, and that better designs are desired to utilize such information.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed answers. I really appreciate your efforts in providing additional experimental results. My biggest concern about the paper, the weak theoretical discussion and analysis, is still not adequately addressed. However, given the strong empirical performance, I will revise the score from 3 to 4.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer eUNf
Comment: Thanks for your valuable feedback. We understand your concern about the weak theoretical analysis of our model, which is a common challenge in the dynamic graph learning field. We have tried our best to provide an empirical discussion of our approach during the rebuttal stage, which may pave the way for future theoretical analysis. We thank you for raising your score in light of our strong performance. | Summary: This paper proposes DyGFormer, a new Transformer-based architecture for dynamic graph learning, whose novelty mainly includes a neighbor co-occurrence encoding scheme and a patching technique. Moreover, it introduces DyGLib, a unified library to promote reproducible, scalable, and credible dynamic graph learning research. In experiments, DyGFormer achieves SOTA performance on most of the datasets.
Strengths: 1. The paper is clearly written and easy to follow.
2. I think this unified library is high-quality, meaningful and urgently needed, and can well promote the development of dynamic graph research.
Weaknesses: 1. In DyGFormer, I think the introduction of the Transformer is not so novel; it has similarities with the attention mechanisms of TGAT and TGN.
2. PINT [1], a work from NeurIPS 2022, is a new and SOTA model in the area of dynamic graphs. Why has this paper not mentioned it?
[1] Souza A, Mesquita D, Kaski S, et al. Provably expressive temporal graph networks[J]. Advances in Neural Information Processing Systems, 2022, 35: 32257-32269.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you add the introduction and experiment of PINT in this paper?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors adequately addressed the limitations. I have no ethical concerns with this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. Following the reviews, we have emphasized the novelty of DyGFormer in the designs of the neighbor co-occurrence encoding scheme and patching technique. We have also added introductions and empirical comparisons with the PINT baseline. We hope our answers have sufficiently addressed your questions and we are glad to provide more answers if there are any further issues.
**W1: In DyGFormer, I think the introduction of transformer is not so novel, which has similarity with the attention mechanism of TGAT and TGN.**
Please kindly note that Transformer is the backbone of our DyGFormer, but it is not the essential part of our technical contributions. Instead, the novelties of DyGFormer come from the original designs of the neighbor co-occurrence encoding scheme and patching technique.
- The neighbor co-occurrence encoding scheme explores the correlations of the source node and destination node based on their historical sequences. This point is less investigated in the existing dynamic graph learning methods because they separately compute the temporal representations of the source and destination node, and thus fail to model their correlations (e.g., JODIE, DyRep, TGAT, TGN, CAWN, …).
- The patching technique helps DyGFormer to effectively and efficiently leverage longer histories, which is also not well addressed by previous methods.
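The patching idea above can be sketched roughly as follows (pure Python with illustrative shapes and names, not the paper's implementation):

```python
def patch_sequence(features, patch_size):
    """Split a node's interaction sequence into non-overlapping patches.

    features: list of per-interaction feature vectors (each a list of floats),
    padded so len(features) is a multiple of patch_size.
    Returns a list of patches; each patch flattens patch_size consecutive
    interactions into one token, so the Transformer attends over
    len(features) // patch_size tokens instead of len(features).
    """
    assert len(features) % patch_size == 0, "pad the sequence first"
    return [
        [x for step in features[i:i + patch_size] for x in step]
        for i in range(0, len(features), patch_size)
    ]

seq = [[float(t)] * 4 for t in range(8)]      # 8 interactions, dim 4
tokens = patch_sequence(seq, patch_size=2)    # 4 tokens of dim 8
print(len(tokens), len(tokens[0]))            # 4 8
```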
Extensive experimental results demonstrate the effectiveness of these two designs. Last but not least, we want to clarify that compared with the mentioned TGAT and TGN, our DyGFormer only needs to learn from the sequences of historical first-hop interactions of nodes, which is conceptually simple but effective. Therefore, we conclude that DyGFormer differs from existing dynamic graph learning methods with original technical contributions.
**W2&Q1: PINT[1], a work in NeurIPS 2022, is a new and sota model in the area of dynamic graph learning. Why this paper has not mentioned it? Can you add the introduction and experiment of PINT in this paper?**
Thanks for providing this insightful work and we apologize for not mentioning this reference. PINT [1] studies the representational power and limits of two categories of temporal graph networks, and proposes a novel architecture that is provably more expressive than both categories. Here, we provide experimental comparisons between our DyGFormer and PINT. Due to limited time during the rebuttal period, we directly compare with the official results of average precision reported in the PINT paper [1]. Please note that the comparisons are fair because the statistics and preprocessing of datasets in PINT are identical to our work.
| | | Wikipedia | Reddit | LastFM | Enron | UCI |
| --- | --- | --- | --- | --- | --- | --- |
| transductive setting | PINT | 98.78 $\pm$ 0.10 | 99.03 $\pm$ 0.01 | 88.06 $\pm$ 0.70 | 88.71 $\pm$ 1.30 | **96.01 $\pm$ 0.10** |
| | DyGFormer | **99.03 $\pm$ 0.02** | **99.22 $\pm$ 0.01** | **93.00 $\pm$ 0.12** | **92.47 $\pm$ 0.12** | 95.79 $\pm$ 0.17 |
| inductive setting | PINT | 98.38 $\pm$ 0.04 | 98.25 $\pm$ 0.04 | 91.76 $\pm$ 0.70 | 81.05 $\pm$ 2.40 | 93.97 $\pm$ 0.10 |
| | DyGFormer | **98.59 $\pm$ 0.03** | **98.84 $\pm$ 0.02** | **94.23 $\pm$ 0.09** | **89.76 $\pm$ 0.34** | **94.54 $\pm$ 0.12** |
From the results, we find that DyGFormer performs better than PINT in all the cases except for the transductive setting on UCI, which demonstrates the superiority of DyGFormer. We will include the above introductions and experiments of PINT in the revised version of our paper.
[1] Provably expressive temporal graph networks, NeurIPS, 2022.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks for your efforts to improve the paper, which have solved most of my concerns. I would like to keep the score as is.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer QwYr
Comment: Thanks a lot for the response! We really appreciate your support for our work. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their valuable feedback and helpful comments on our work. We are delighted by the reviewers’ acknowledgments that the proposed DyGFormer is novel with original designs (Reviewers Ea5H, eUNf, ErNY, and zX7A), the presented DyGLib is of high quality and advances the dynamic graph learning field (Reviewers Ea5H, QwYr, and zX7A), the experimental evaluations are rigorous and extensive (Reviewers Ea5H, eUNf, ErNY, and zX7A), and the paper is well-organized and easy to follow (Reviewers Ea5H, QwYr, ErNY, and zX7A).
To the best of our efforts, we have provided thorough responses to address the concerns raised by each reviewer. In particular, the responses mainly consist of:
- Analysis of why the performance of DyGFormer varies across different datasets and various negative sampling strategies;
- Additional introductions and empirical comparisons with the PINT baseline;
- Explanations of the motivation of the neighbor co-occurrence encoding scheme;
- Discussions of several related dynamic graph learning methods based on temporal point process and neural ordinary differential equation;
- Additional evaluations of DyGFormer and baselines on larger datasets;
- Specifications of some details in experimental settings;
- Elaborations of certain experimental analysis.
In summary, all the comments are very valuable for us to improve the quality of this work and we will incorporate them in the revised version of our paper. Once again, we thank the reviewers for their constructive feedback. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper presents a transformer-based architecture (DyGFormer) for dynamic graph learning, based on a node co-occurrence encoding scheme and patching. Further, they present DyGLib a library for uniform evaluation of dynamic graph learning techniques. Extensive experimental evaluations over diverse datasets show that DyGFormer performs well.
Strengths: **Originality.** While the Transformer architecture is well-known, its application to dynamic graph learning, together with the proposed co-occurrence encoding and patching schemes, is novel. Various libraries/frameworks (as discussed in 108) exist for dynamic graph learning, so DyGLib is not novel in that regard, but a fresh, rigorous and extensible evaluation is appreciated.
**Quality / Clarity.** The paper is well-written and easy to follow. The DyGLib codebase is high-quality and well-documented. At a glance, it looks easy to use. Also, the appendix is thorough and well-put-together.
**Significance.** Dynamic graph learning is an important research problem. This work not only presents a good solution but also paves the way for rigorous future work.
Weaknesses: While DyGFormer outperforms the baselines in avg. rank, there is a problematic trend that on some datasets DyGFormer can be much worse than the best or second best baseline in terms of absolute performance points (Table 1). This needs some analysis. What characteristics of these datasets make DyGFormer a bad choice against the baselines? What aspects of DyGFormer and the baselines could be causing their poor and better performance respectively? Why does DyGFormer's superiority falter when going from "rnd" to "hist" to "ind"?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Is it possible to integrate DyGLib / DyGFormer into TGL ([71] in paper)? If so, that would prevent a fragmentation of frameworks, and also allow dynamic graph learning research on billion-scale graphs.
Also, see the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Limitations are discussed but in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the helpful reviews. We have analyzed how the properties of datasets affect DyGFormer and explained the reasons for its varying performance under various negative sampling strategies. We have also discussed the possibility of integrating DyGLib and TGL. We hope our answers have well addressed your concerns.
**W1(Part 1/2): What characteristics of datasets make DyGFormer a bad choice against baselines? What aspects of DyGFormer and baselines cause their poor and better performance?**
DyGFormer is superior to baselines by 1) exploring the correlations between the source and destination nodes from their historical sequences via neighbor co-occurrence encoding, which assumes more common historical neighbors imply more future interactions (Lines 170-171); 2) using the patching technique to attend to longer histories. Thus, DyGFormer tends to perform better on datasets that favor these two designs.
For this analysis, we first define some notation. $TP$, $TN$, $FN$ and $FP$ are True/False Positives/Negatives. The Link Ratio ($LR$) is the ratio of links in the corresponding set, computed as $TP/(TP+FN)$, $TN/(TN+FP)$, $FN/(TP+FN)$ and $FP/(TN+FP)$. The Common Neighbor Ratio ($CNR$) is the ratio of common neighbors in source node $u$’s sequence $S_u$ and destination node $v$’s sequence $S_v$, i.e., $|S_u \cap S_v|/|S_u \cup S_v|$. As a method with more $TP$ and $TN$ (and fewer $FN$ and $FP$) is better, we compute $LR$ and averaged $CNR$ of links in $TP$ and $TN$. We show results in $LR$($CNR$) format on five typical datasets due to the space limit.
||$TP$|$TN$|
|---|---|---|
|Wikipedia|92.74(59.09)|97.19(0.01)|
|Flights|96.13(47.58)|95.33(1.40)|
|UCI|82.70(28.03)|96.77(1.45)|
|US Legis.|78.95(75.18)|56.83(53.98)|
|UN Vote|65.18(56.24)|45.43(76.02)|
We observe that **when the $CNR$ of $TP$ is several times higher than the $CNR$ of $TN$ in a dataset, DyGFormer often outperforms baselines (most datasets have this property).** For Wikipedia, Flights and UCI, the $CNR$s of $TP$ are much higher than those of $TN$ (e.g., 59.09% vs. 0.01% on Wikipedia). This property matches the neighbor co-occurrence encoding’s assumption, enabling DyGFormer to correctly predict most links (92.74% of positive links and 97.19% of negative links on Wikipedia). Moreover, as LastFM and Can. Parl. can gain from longer histories (see Figure 3, Table 7), DyGFormer is significantly better than baselines on them. **When the $CNR$s of $TP$ and $TN$ are less distinguishable in a dataset, DyGFormer may do worse (only 2 out of 13 datasets show this property).** For US Legis., the $CNR$s of $TP$ and $TN$ are close (75.18% vs. 53.98%), which makes DyGFormer worse than memory-based baselines (JODIE, DyRep, and TGN). For UN Vote, the $CNR$ of $TP$ is even lower than that of $TN$ (56.24% vs. 76.02%), which is opposite to our assumption, making DyGFormer perform worse than a few baselines. Since these two datasets don’t obviously gain from longer sequences either (see Table 7), DyGFormer cannot achieve satisfactory results on them.
**W1(Part 2/2): Why does DyGFormer's superiority falter when going from "rnd" to "hist" to "ind"?**
Compared with rnd, the hist and ind strategies sample previously observed links as negatives. This turns previously positive links into negatives, which tends to hurt DyGFormer since the assumption of the neighbor co-occurrence encoding scheme may be violated. As positive links are the same among rnd, hist, and ind, we compute $LR$ and averaged $CNR$ of links in $FP$ and show results in the $LR$($CNR$) format.
|||$FP$|
|---|---|---|
|Wikipedia|rnd|2.81(0.02)|
||hist|89.28(14.00)|
||ind|94.53(11.66)|
|Flights|rnd|4.67(0.01)|
||hist|94.52(35.62)|
||ind|92.94(30.29)|
|UCI|rnd|3.23(9.98)|
||hist|64.93(12.22)|
||ind|76.42(13.81)|
|US Legis.|rnd|43.17(79.40)|
||hist|17.31(87.51)|
||ind|21.21(75.51)|
|UN Vote|rnd|54.57(79.60)|
||hist|39.90(75.53)|
||ind|52.92(79.15)|
We find that **when hist or ind causes a several times higher $CNR$ of $FP$ than rnd in a dataset, DyGFormer drops sharply.** For Wikipedia, Flights and UCI, the $CNR$s of $FP$ with hist/ind are much higher than with rnd (14.00%/11.66% vs. 0.02% on Wikipedia). This misleads DyGFormer into predicting negative links as positive and causes drops (89.28%/94.53% of negative links are incorrectly predicted with hist/ind on Wikipedia, while only 2.81% are wrong with rnd). We also note the drops on UCI are milder since the changes in $CNR$ caused by hist or ind vs. rnd are less obvious than the changes on Wikipedia or Flights. **When the changes in $CNR$ of $FP$ caused by hist or ind are not obvious in a dataset, DyGFormer is less affected.** As hist/ind makes little change to the $CNR$s of $FP$ on US Legis., we find it ranks second with hist/ind, which may indicate DyGFormer is less influenced by neighbor co-occurrence encoding on US Legis. and generalizes well to various negative sampling strategies. For UN Vote, though its $CNR$s of $FP$ are not affected by hist and ind either, DyGFormer still performs badly due to its inferior performance with rnd.
To this end, we conclude that for datasets with much higher $CNR$ of positive links than negative ones or datasets that can benefit from longer histories, DyGFormer is a good choice. Otherwise, we may need to try other methods. Also, our neighbor co-occurrence encoding may be sometimes fragile to various negative sampling strategies if its assumption is violated. It’s promising to tackle this as future work.
**Q1: Is it possible to integrate DyGLib/DyGFormer into TGL?**
Yes. Technically, TGL trains on large-scale dynamic graphs with some engineering implementations (e.g., realizing temporal neighbor sampler by C++ for efficient sampling, using multiple GPUs for parallel training). Our DyGLib contains more popular dynamic graph learning models and supports comprehensive evaluations. Hence, DyGLib and TGL are complementary and basically compatible as most of their modules are developed by PyTorch. It is promising to combine them for a better framework.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. I would like to keep my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Ea5H
Comment: Thanks for your response! Your support for our work means a lot to us. | null | null | null | null | null | null |
State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding | Accept (poster) | Summary: The paper proposes State2Explanation, a framework for training RL agents in such a way that both the human and agent benefit, the Protégé Effect as the authors state. The basic idea is to learn a joint embedding space with "temporal" concepts that actively helps the agent train better by shaping their rewards. At test time, these explanations are provided to humans to also help train them.
Strengths: A strength of the paper is I feel the motivation. Explainability in RL is harder than supervised learning due to the temporal added component, so approaching this problem is well motivated.
In addition, the interplay between agent and human is also very exciting, I love the idea of the two helping and playing off each other. I feel this is a great way to impose human understandable concepts onto an RL agent's policy.
I also appreciate the rigour in the user study (although I need some clarifications in the rebuttal), it's clear to me the authors tried here.
Weaknesses: What I always look for in a paper is a single nice idea, and I am not certain this paper really contains one, let me explain by iterating your claimed contributions.
Regarding the desiderata: (1) the idea that a 'concept should be grounded in human domain knowledge' is not a new idea, I feel. The original T-CAV paper noted this by e.g. using zebra stripes to explain zebra classifications, rather than e.g. logit values. Simply saying the same thing is true in a sequential setting doesn't seem that original to me. I'm not aware of a great many people suggesting q-values are a good explanation. (2) The idea that a concept should relate to the task goal is an interesting one, and could be useful in certain contexts, but I feel it doesn't generalize well. For example, if I'm in a self-driving car going from a => b, and halfway through the journey the car brakes to avoid hitting a pedestrian, your framework seems to posit the explanation should be "I avoided killing the human so I could still get to point b". I feel that's a questionable belief system we are teaching the agent here. (3) is a nice idea, but I don't feel it's particularly surprising; the idea that explanations should be robust and generalisable is well agreed upon.
The framework you propose does seem quite interesting to me, but reward feedback loops are not my area of expertise, so I defer to my colleagues to evaluate the novelty of such a framework.
Section 5.2: I am slightly concerned that the explanations are not always matching in Figure 3; does that mean this method will inevitably give "wrong" explanations? Also, I don't know what i, j, and k are in the figure. Now, I personally believe it matters more that it helps people in a user study, which you show it does, so that's great and I feel it makes up for this.
Section 5.3: If I understand this correctly, your method doesn't seem to show much difference here at all, and in fact Fig 4a seems to show the baseline outperforming it?
User Evaluation: I appreciate the attention to detail here, but a lot of details are missing or (I feel) badly explained. I don't really understand how people's performance could get worse in group $E_{A}$; all they are being shown is the action as an explanation, so the idea that their ATS would get worse seems extremely odd to me, and makes me feel I am misunderstanding something here. I tried to explore the appendix to understand what was happening, but didn't have much luck.
I think there's some citations that you might also like to be aware of regarding human friendly concepts in sequential settings.
Ji, Ying, Yu Wang, and Jien Kato. "Spatial-temporal Concept based Explanation of 3D ConvNets." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Kenny, Eoin M., Mycal Tucker, and Julie Shah. "Towards interpretable deep reinforcement learning with human-friendly prototypes." The Eleventh International Conference on Learning Representations. 2023.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. During the training there's no feedback from the human right? If I understand, the human defines the concepts, and then these help the agent train better? (but Fig 4 seems to show this isn't the case?)
2. Whilst you used a CNN in connect 4, does this scale to harder Deep RL problems?
3. Is my understanding that Fig 3a shows the explanations are often "wrong" accurate?
4. What is i, j, and k in Fig 1?
5. How did you pick a sample size in the user study? Although it seems fairly rigorous, I don't see any power analysis, and you aren't clear what your attention checks are? It also seems strange that you had 66->60 and 98->90 in the two domains, a perfect balance for 15 in each group.
I might recommend using Prolific instead, it seems to have less issues compared to MTurk which is full of bots and LLMs.
I am happy to revise my review after the rebuttal.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: This is ok
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer GSRV, we thank you for your valuable feedback. Below we provide responses to weaknesses and questions:
**W: single idea missing..**
We disagree with this comment. To our knowledge, our framework S2E is the first unified framework that considers how concept-based explanations can provide a dual benefit to the RL agent and user, inspired by the Protege Effect. Specifically, S2E leverages a joint embedding model to retrieve concept-based explanations that both inform reward shaping to aid agent learning during training time, as well as improve user task performance at deployment time. Our work is important since S2E goes beyond showing the utility of explaining an agent’s behavior for user understanding and provides a joint benefit to the agent as well, all within a single framework.
**W: desiderata..**
Prior work on concept-based explanations for sequential decision making has offered limited definitions of concepts using state preconditions, action costs, or control logic [22, 46 in paper]. However, our work posits that concepts in sequential decision making should be defined by higher-level properties, as outlined by our desiderata. Our desiderata, inspired by concept-based explanations in classification tasks (like T-CAV), are a first step toward a generalized definition of concepts in sequential decision making. We do not claim that our desiderata are a complete set and will clarify this in Sec. 3. Future work includes extending our desiderata, including your valuable point about considering environmental uncertainties when defining concepts.
**W: Sec. 5.2** + **Q3:**
Fig. 3 shows how accurately the joint embedding models retrieve concept-based explanations for state-action pairs. Given that the recall rates are not 100%, incorrect explanations are retrieved a fraction of the time, as shown in Fig. 3. However, even with imperfect joint embedding models in S2E, the results in Sec. 5 and Sec. 6 show S2E's utility in providing a dual benefit to agent learning rate and user task performance (see weakness response).
**W: Sec. 5.3**
In Fig. 4a, the current SoTA baseline agent for Connect 4 is an agent not trained with reward shaping, referred to as "MuZero + No-RS". Fig. 4a shows that even with an imperfect joint embedding model, S2E is able to inform reward shaping and improve the agent's learning rate compared to "MuZero + No-RS" in Connect 4. As mentioned in Sec. 5.3, our S2E-informed reward shaping ("MuZero + S2E-RS") improves the agent's learning rate by ~200 training steps compared to "MuZero + No-RS". The "MuZero + E-RS" curve in Fig. 4a shows the upper-bound performance of S2E if Connect 4's joint embedding model were perfectly trained. We expect the small gap between "MuZero + E-RS" and "MuZero + S2E-RS", since Fig. 3a shows that Connect 4's joint embedding model is not 100% accurate. Similarly, Fig. 4b and 4c show that in Lunar Lander, "MuZero + S2E-RS" informs reward shaping comparably to the existing SoTA dense reward shaping ("MuZero + E-RS"). We will clarify this in Sec. 5.3.
**W: User Eval..**
This was an interesting finding; however, it is not a weakness of our method. The downward trend with "Action-Based" explanations demonstrates possible negative effects of having to learn from only expert actions as feedback. We hypothesize that providing users with only expert actions caused them to project incorrect reasoning onto the experts' actions, which in turn confused their understanding and led to worsened performance. We will discuss this finding in the appendix.
**W: Citations** -- We appreciate you pointing these out; we will include them in our related work.
**Q1:**
That is correct: during the RL agent's training time there is no feedback from the human. To clarify, concepts are defined by domain experts, and S2E learns a joint embedding model that maps concept-based explanations to state-action pairs. The learned joint embedding model is then used to inform reward shaping during RL agent training and to produce concept-based explanations at deployment to improve user task performance. See our response to the weakness around Sec. 5.3: S2E can inform reward shaping and thereby improve agent learning rates compared to no reward shaping (Connect 4), and provide comparable learning rates to existing dense reward functions (Lunar Lander).
**Q2:**
CNNs have been leveraged in existing Deep RL architectures, including the SoTA MuZero model architecture. We do not claim our exact CNN architecture will apply to all Deep RL problems, and modifications (such as additional layers in the joint embedding model) may be necessary for harder RL problems. Developing a single joint embedding model architecture generalizable across multiple complex RL domains was beyond the scope of this work and is mentioned in our limitations as future work.
**Q4:**
The "i", "j" and "k" in Fig. 1 denote three different concept-based explanation embeddings in the learned joint embedding space. As noted in Sec. 4.3, the best-ranked explanation embedding is retrieved from the learned joint embedding space, similar to image-to-text retrieval. We will clarify "i", "j" and "k" in the Fig. 1 caption.
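As a minimal illustrative sketch of this retrieval step (not the paper's implementation; the function and variable names below are ours), the best-ranked explanation embedding can be found via a cosine-similarity nearest-neighbor lookup:

```python
import numpy as np

def retrieve_explanation(state_action_emb, explanation_embs, explanations):
    """Return the best-ranked explanation for a state-action embedding,
    ranked by cosine similarity in a shared joint embedding space."""
    q = state_action_emb / np.linalg.norm(state_action_emb)
    e = explanation_embs / np.linalg.norm(explanation_embs, axis=1, keepdims=True)
    scores = e @ q  # cosine similarity against each explanation embedding
    return explanations[int(np.argmax(scores))]
```

For example, with three explanation embeddings standing in for "i", "j" and "k", the query is matched to whichever embedding lies closest in direction.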
**Q5:**
It is good practice to have an equal number of study participants per study condition to conduct fair statistical comparisons. After filtering participants, we continued to recruit until we had 15 participants per condition. We filtered participants by analyzing visualizations of the participants' games. We deemed a participant to have shown no effort if they played only a single action during the entire course of any of their pretest or posttest games. We did not conduct a power analysis in this particular study because power analyses require estimating each study condition's mean or variance, which we did not deem appropriate to assume. We will add these details to Appendix E.3.
---
Rebuttal Comment 1.1:
Title: Response
Comment: W: single idea missing..
No problem, as I say that’s not my area of expertise, so I defer to the other reviewers to rate that aspect.
***
W: desiderata..
I understand, I agree with the sentiment of the desiderata, I just feel that it’s not really a significant contribution by itself, although a nice “aside” in the paper, something which is “backed up” by the subsequent experimental evidence.
***
W: Sec. 5.2 + Q3:
Ok I understand, thanks for the clarification here. Again, I don't feel it's a huge issue if explanations are occasionally wrong; it really depends on the downstream application we have in mind. E.g., on a Mars rover I do feel they have to give certain guarantees, but if you're just using them to improve task performance in another, less sensitive way, I could imagine that's ok. No pressure, but it could be worth mentioning.
***
W: Sec. 5.3
Ok thanks, for what it’s worth I personally struggled a bit to understand this, so it probably would be worth cleaning up the writing a bit.
***
W: User Eval..
If you could explain this in even more detail I would appreciate it, I still just really don't understand how this could happen. Even the experimental design?
***
Q1:
Thank you, and just to clarify, these explanations are for actions right? Not a series of actions? That is, at each action taken, the user must interpret a separate explanation? Rather than say, one explanation for the next 20-40 actions?
***
Q2:
Ok
***
Q4::
Ok thanks.
***
Q5:
> It is good practice to have an equal number of study participants per study conditions to conduct fair statistical analysis comparisons.
100% agree.
> After filtering participants, we continued to recruit participants to ensure we had 15 participants per condition.
But why 15? Why not 10? Why not 50? My concern is that it is well known you can “p-hack” your way to significance by simply increasing your sample size, that’s what a power analysis is for.
> We filtered participants by analyzing visualizations of the participants' games. We determined a participant to show no effort if they only played a single action during the entire course of any of their pretest or posttest games.
Was there a large amount that played e.g. 2 actions per game? A single action seems a fairly arbitrary cutoff.
> We did not conduct a power analysis in this particular study given that power analyses require making estimations about each study conditions' mean or variance which we did not deem appropriate to assume. We will add these details to Appendix E.3.
Ok. Usually you would observe similar studies done prior and base your assumptions on that. I know it is difficult, but user studies are not easy to do well.
I’d appreciate if you could reply to the remaining concerns, thank you!
---
Reply to Comment 1.1.1:
Title: Response to GSRV - Part 1
Comment: Thank you for your follow up. We respond to your comments and concerns below. We have split our responses into two parts. This is part 1.
**"Desiderata + single idea":** We also agree that our main contribution is the S2E framework. The desiderata are also a contribution, given that they expand the existing definitions of concept-based explanations in sequential decision making problems.
**“Sec 5.2 + Q2”:** We agree and will add such details in the limitations section.
**“Sec 5.3”:** We will improve the clarity of the current text in Sec. 5.3.
**Q: “these explanations are for actions right...**
In both user studies, during the "Explanation Stage", the user sees an explanation per action taken by the RL agent in all study conditions. In the case of the concept-based explanation with temporal grouping (TeG) condition, some explanations are grouped across identical sequential actions (see Sec. 4.3 and 6.2). However, the user still sees a single explanation per action; they are just made aware that the explanation holds true for "N" steps. For example, if the Lunar Lander agent "fires the right engine" 5 times consecutively, a concept-based explanation with TeG would be: "Fire right engine for the next 5 steps to decrease agent tilt", and the user would see that explanation across the 5 steps. In comparison, a concept-based explanation without TeG would be "Fire right engine in this step to decrease agent tilt", and the user would see that explanation for each of the 5 steps.
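A minimal sketch of the temporal grouping (TeG) behavior described above, collapsing runs of identical consecutive actions into one explanation that holds for "N" steps. The function names and the explanation template are ours, for illustration only, modeled on the Lunar Lander example:

```python
from itertools import groupby

def temporally_group(actions):
    """Collapse runs of identical consecutive actions into (action, n_steps) pairs."""
    return [(a, len(list(run))) for a, run in groupby(actions)]

def explanation_with_teg(action, n_steps, effect):
    # Hypothetical explanation template, echoing the example in the text.
    step_str = "this step" if n_steps == 1 else f"the next {n_steps} steps"
    return f"{action} for {step_str} to {effect}"
```

So five consecutive "fire right engine" actions become a single `("fire right engine", 5)` group, rendered once as "Fire right engine for the next 5 steps to decrease agent tilt".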
---
Reply to Comment 1.1.2:
Title: Response to GSRV - Part 2
Comment: Thank you for your follow up. We respond to your comments and concerns below. We have split our responses into two parts. This is part 2.
**User Eval: If you could explain this in even more detail..**
In our previous response, we provided one hypothesis for the phenomenon of decreased user performance when exposed to "action-based" or "value-based" explanations: these explanations may cause end-users to project wrong understandings onto agent actions, and therefore their performance decreases in the post-tests. Studying exactly why such a decrease occurs requires more in-depth qualitative user analyses, such as conducting semi-structured interviews after the user studies to probe participants about their experience and understand where such decreases may come from.
**Experimental design:**
We detail our user study in Appendix E.1 and provide visuals of the study in Appendix E.4. The user study consists of four stages, in order: a "practice" stage, a "pre-test" stage, an "explanation" stage, and a "post-test" stage. Prior to the "practice" stage, participants are introduced to the task and the game they will be playing. After the "post-test" stage, the user also answers a few survey questions in a questionnaire.
The "practice" stage allows users to play 2 practice games; participants are told these are unscored and are meant to further familiarize them with the game. The "pre-test" stage consists of playing 3 scored games; users are told these games are an assessment and are scored. The "explanation" stage consists of the user interacting with a well-trained agent as the agent plays the game. The user is able to click through every action the agent plays in the game, and depending on the study condition, the user sees an accompanying explanation type for each action played. In Connect 4, the user watches a Connect 4 RL agent play 3 games. In Lunar Lander, the user watches a Lunar Lander RL agent play 1 game. The "post-test" stage consists of playing 3 more scored games.
In our analysis, we study how explanations provided in the "explanation" stage may help improve the user's task performance. Our metric, ATS (adjusted task score), is therefore computed as the difference between the aggregated "post-test" task scores and the aggregated "pre-test" task scores. A single game's task score is computed as an aggregation of the user rewards received during the game (see Sec. 6).
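The ATS computation described above can be sketched directly (a minimal sketch; the function names are ours):

```python
def task_score(game_rewards):
    """A single game's task score: aggregate of rewards received during the game."""
    return sum(game_rewards)

def adjusted_task_score(pretest_games, posttest_games):
    """ATS: aggregated post-test task scores minus aggregated pre-test task scores."""
    return (sum(task_score(g) for g in posttest_games)
            - sum(task_score(g) for g in pretest_games))
```

A positive ATS means the participant scored higher after the explanation stage than before it, so pre-test baseline performance is already accounted for in the metric.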
**Q5: But why 15?...sample size.,.power analysis"**
The best way to address your concerns about our sample size per study condition having enough power is by demonstrating an a priori power analysis as well as a post-hoc power analysis. Note that a priori power analyses, as mentioned earlier, require setting assumptions. We set the minimal assumption of effect size, since we believe estimating group means is unrealistic when conducting such a study for the first time. All power analyses are conducted in R using pwr.anova.test (a priori power analysis) and power.anova.test (post-hoc power analyses). We will include these results in the appendix.
The a priori power analysis assumes an estimated power of 0.8 and an estimated effect size of 0.4. This effect size is moderate, considering Cohen's guidelines for effect size [1, 2]. With these assumptions, we find N=15.89 for Connect 4 and N=13.09 for Lunar Lander. Our current study has N=15 for both studies, which falls within these estimated N values. We also performed a post-hoc power analysis, given that after performing the user studies we have the actual group means per study condition. The post-hoc power analyses show that in Connect 4 our N=15 has a power of 0.99 and in Lunar Lander our N=15 has a power of 0.96. Thus, our statistical analyses have high statistical power. Overall, the above analyses show that N=15 is a reasonable number of participants per condition for our user studies.
1. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillside, NJ: Lawrence Erlbaum Associates.
2. Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159. doi:10.1037/0033-2909.112.1.155
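The a priori calculation above (done in R with pwr.anova.test) can be reproduced approximately in Python via the noncentral F distribution. This is a sketch, not the authors' exact analysis; in particular, the number of study conditions `k` passed below is an assumption for illustration:

```python
from scipy.stats import f as f_dist, ncf

def anova_power(n_per_group, k, effect_f, alpha=0.05):
    """Power of a one-way ANOVA F test given Cohen's f, as in R's pwr.anova.test."""
    df1, df2 = k - 1, k * n_per_group - k
    nc = (effect_f ** 2) * k * n_per_group  # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, nc)

def min_n_per_group(k, effect_f, target_power=0.8, alpha=0.05):
    """Smallest integer per-group n reaching the target power."""
    n = 2
    while anova_power(n, k, effect_f, alpha) < target_power:
        n += 1
    return n
```

With effect size f = 0.4, alpha = 0.05, and power 0.8, `min_n_per_group` yields a per-group n in the mid-to-high teens for a handful of groups, consistent in magnitude with the N values reported above.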
**Q5: A single action seems a fairly arbitrary cutoff...**
Playing a single action was an obvious tell-tale sign that a participant was not performing the study with effort and was trying to finish as fast as possible. We visualized every participant's actions in all games. There were only two distinctions we could fairly make: participants who pressed a single button (a single action) until the end of most games, and participants who played more varied actions, showing effort. Trying to determine a more nuanced cutoff for filtering did not seem appropriate and could induce additional biases. | Summary: Inspired by the Protege effect, learning and developing explanations should provide a dual benefit, both to the readers of the explanations and to their developers. Based on this idea, the paper proposes State2Explanation, an algorithm to learn joint embeddings between state-action pairs and concept-based explanations. This allows for reward shaping, which benefits the explanation developer, while also providing explanations. These claims are validated for agents in two different reinforcement learning settings: Connect 4 and Lunar Lander.
Strengths: 1. Provides concrete desiderata for what a concept is, including the need for generalizability and its relationship to the task goal
2. Reward shaping through explanations is potentially novel, and explores use cases of explanations beyond explainability
3. Thorough evaluation investigates the impact of various additions to the model, making it clear what the impact of Information Filtering (InF) and Temporal Group (TeG) are
4. User study demonstrates the real-world viability of the model through a human-explainability lens
Weaknesses: 1. Concepts for domains depend on expert knowledge of state-action pairs, making it unclear how easy it would be to generalize beyond well-studied games. This is especially the case when state-action pairs tend to get very large.
2. An additional baseline would make it clearer what the impact of each study condition was in Section 6. In particular, a baseline with no intervention/information would make it clear how much the rise in performance is due to the additional practice attained from the PreTest when completing the PostTest.
3. If concepts for Lunar Lander are derived from the existing domain reward function, then claiming that $M_{LL}$ "informs reward shaping comparable to expert-defined dense reward functions" seems to follow from the definition of the concepts rather than an indicator of the performance of the algorithm.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. For reward shaping, how is the amount of reward determined/amount the reward function is changed by?
2. In general, who would annotate the explanations, and what is the impact if there are multiple explanations for a single move/conflicting annotations?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The paper discusses limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer g9Rv, we thank you for your valuable feedback. Below we provide responses to weaknesses and questions:
**W1: “Concepts for domains are dependent on expert knowledge for state-action pairs, making it unclear how easy it would be to generalize beyond well-studied games. This is especially the case when state-actions pairs tend to get very large.”**
Thank you for highlighting this point. It is true that concepts related to state-action pairs should be derived from domain experts to ensure accurate concept representations. To scale to scenarios with large state-action spaces, where collecting large amounts of expert-annotated data may be infeasible, the joint embedding model in S2E can be optimized for few-shot learning. Note, a strength of leveraging a joint embedding model in S2E is that the trained model can also be used to retrieve accurate concept-based explanations for unannotated state-action pairs. In this manner, when considering scenarios with large state-action spaces, we can leverage the joint embedding model to provide annotations as well. We will include this in the discussion section.
**W2: “An additional baseline would make it clearer what the impact of each study condition was in Section 6. ….a baseline with no intervention/information would make it clear how much the rise in performance is due to the additional practice attained from the PreTest when completing the PostTest.”**
It is true that an additional baseline where no information is provided in the "Explanation stage" could have offered an additional layer of analysis compared to our current study conditions.
Importantly, we’d like to highlight that such additional baseline is **not necessary** to show “how much the rise in performance is due to the additional practice attained from the PreTest when completing the PostTest.” In our experimental analysis, we do currently take into consideration any learning effects obtained via the “PreTest” since our ATS metric measures the *difference* between the Pre-Test and Post-Test scores. That is, we account for participants’ initial performance on the “PreTest” and measure whether the intervention/explanations provided in the “Explanation stage” help to provide any *improvements* to their “Post Test” scores.
**W3: “If concepts for Lunar Lander are derived from the existing domain reward function, then claiming that MLL "informs reward shaping comparable to expert-defined dense reward functions" seems to follow from the definition of the concepts rather than an indicator of the performance of the algorithm.”**
We disagree with this comment. Concepts derived from an existing expert-defined reward function allow us to directly compare how well a joint embedding model trained using these concepts (M_{LL}) can inform reward shaping when compared to expert-defined reward shaping. In other words, having an existing dense reward function for Lunar Lander allows us to make a direct comparison of whether M_{LL} has learned good mappings between concepts and state-action pairs to effectively inform reward shaping.
**Q1: For reward shaping, how is the amount of reward determined/amount the reward function is changed by?**
As mentioned in Sections 4.2 and 5.3, the actual amount of reward for the Lunar Lander domain is determined by the existing reward shaping function. For Connect 4, we performed a hyperparameter sweep to determine corresponding shaping values for each concept. In Appendix D.1 we present the shaping values for each concept in each domain. Note, we do not claim these are the optimal shaping values, only the best that either currently exist (Lunar Lander) or were found via a hyperparameter sweep (Connect 4).
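As a hedged sketch of the concept-informed shaping described above: the concept names and shaping values below are hypothetical placeholders, not the values in Appendix D.1, and the function is ours for illustration only.

```python
# Hypothetical concept -> shaping value table (illustrative only;
# the real per-concept values come from the existing reward function
# for Lunar Lander and from a hyperparameter sweep for Connect 4).
SHAPING_VALUES = {
    "three_in_a_row": 0.2,
    "block_opponent_win": 0.5,
    "no_concept": 0.0,
}

def shaped_reward(env_reward, retrieved_concept):
    """Environment reward plus the shaping bonus for the retrieved concept."""
    return env_reward + SHAPING_VALUES.get(retrieved_concept, 0.0)
```

An incorrect retrieval simply looks up the wrong row of the table, which is how imperfect Recall@1 translates into imperfect shaping signals during training.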
**Q2: In general, who would annotate the explanations, and what is the impact if there are multiple explanations for a single move/conflicting annotations?**
In general, concepts for any domain should be collected from experts within a domain. We will emphasize this in Section 4.1 and Section 5.1. As mentioned in Sec. 5.1, the concepts in our work are identified by access to expert-domain knowledge within the fields.
It is interesting to consider cases where "k" different concept-based explanations exist for a given state-action pair. While we did not consider these scenarios in our work, we believe there are at least two possible solutions that do not drastically change the S2E framework. One solution is to perform an inter-rater reliability test to determine whether experts agree that a single concept-based explanation out of the "k" explanations is preferable, and to use that single explanation when training the joint embedding model. A second solution is to allow the joint embedding model to see state-action pairs associated with all "k" different concept-based explanations. During retrieval, the top-ranked explanation can be retrieved, and given a well-trained joint embedding model, the retrieved explanation will be one of the "k" possible correct explanations. These solutions are an opportunity for future work, and we will include this discussion in an appendix related to Section 5.1.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your clarifications. On the topic of the user evaluation my question was the following: Would the post-test agents not outperform the pre-test agents because the post-test agents see 3 more scored games, getting more practice? Therefore, would it not be more fair to have some set of agents practice -> explanation -> practice, and others do practice -> no explanation -> practice, then compare each?
---
Reply to Comment 1.1.1:
Title: Further Clarification on User Eval to Reviewer g9Rv
Comment: I appreciate your follow up. There seems to be an underlying misunderstanding that we did not catch earlier. In our user evaluation in Section 6, the RL agent’s policy in the user study is fixed and the agent is not learning in any way during our user study.
We’d like to make this distinction very clear. The S2E framework provides a dual benefit of concept-based explanations to the RL agent and end-user, but at completely different stages. When we discuss the positive effects of concept-based explanations from S2E on agent learning, that occurs *during the agent’s training time*. The impact of S2E on agent learning is discussed in Section 5. When we evaluate the benefit of concept-based explanations on end-users in Section 6, the RL agent has already been deployed, meaning the RL agent is already trained and its policy is fixed. Therefore, our user evaluation in Section 6 is focused on seeing how different explanation types of a fixed RL agent help participants improve their task performance and understanding. At the beginning of Section 6 we will reiterate this distinction to avoid future misunderstandings.
With the above clarifications, we’d like to now respond directly to your questions.
**Q: “Would the post-test agents not outperform the pre-test agents because the post-test agents see 3 more scored games, getting more practice? ..Would it not be more fair to have some set of agents practice -> explanation -> practice, and others do practice -> no explanation -> practice, then compare each?**
There are no “post-test agents” or “pre-test agents” and there is no RL agent learning or practicing in the study. Instead, human participants play games in the “post-test” and “pre-test” stages. Furthermore, all participants play the same number of games, and no user receives more practice.
Potential improvement in participant performance due to learning effects is accounted for in the metric. Please see our original rebuttal response on how our metric, ATS, takes user learning effects into consideration since the ATS metric measures the *difference* between the Pre-Test and Post-Test scores. In this manner, we account for participants’ initial performance on the “PreTest” and measure whether the explanations provided in the “Explanation stage” help to provide any *improvements* to their “Post Test” scores. Also, we’d like to point out that your question in the original review about having an additional baseline is still valid for human participants. That is, we could have had an additional baseline where human participants in the “Explanation Stage” received no feedback. However, as we mention in our original rebuttal response, our current study conditions are fair comparisons given that we account for learning effects as well as compare against relevant baselines utilized in prior work (Action-Based & Value Based Explanations).
Please let us know if the above clarifies your understanding of our user study or if you have any other clarifying questions. Thank you! | Summary: The paper proposes a framework to incorporate explanation concepts to sequential decision tasks. The framework can be applied both to the training of the agent by improving RL and to provide explanations to end-users during deployment. The framework is tested using two simple games, Connect 4 and Lunar Lander, where it shows it help improve training performance and user performance.
Strengths: The main strengths of the paper are:
1. It provides an unified framework for explanations which can be used both for training and deploying.
2. The explanation framework is based on user-understandable concepts and terms.
3. The paper provides a sound theoretical and empirical analysis of the proposed framework.
4. The paper performs user testing and subsequent data analysis in a methodologically sound manner, which is rare in NeurIPS papers.
Weaknesses: The main weaknesses of the paper are:
1. The paper condenses too much information into very little space, including impossible-to-read figures. It is very hard to read.
2. The paper fails to discuss how difficult it will be to create frameworks for more complex scenarios with more, and more complex, concepts. For instance, consider a system making decisions about buying and selling stocks.
3. Because of 2, the ideas presented in the paper may never be applicable to real-world problems.
4. It provides a very brief description of how the concepts are actually created, stored, and verified.
5. The paper does not discuss situations where the explanation is wrong or inadequate for a situation (for instance, there is a better move), and how that impacts training and user performance.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How do you expect the framework to perform when it is applied to more complex tasks and domains, both in terms of describing the actual concepts and on the performance during training?
2. How hard is it to identify, create, store, and debug concepts?
3. What kinds of wrong explanations does it produce? How often? How does that impact training and user performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The paper does not do a good job of discussing how it can scale to real-world applications. By not doing so, it failed in identifying an important limitation of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ppHL, we thank you for your valuable feedback. Below we provide responses to weaknesses and questions:
**W1:** Thank you for pointing this out; we will increase our image sizes.
**W2 & W3:**
We believe that our S2E framework will be applicable to more complex scenarios; however, two components within the framework may need modifications for scalability. We will add the discussion of these two components and their importance in the discussion section.
First, with an increased number of concepts (e.g., in Chess) coupled with some concepts likely to occur more rarely (e.g., castling in Chess), we hypothesize that leveraging more complex joint embedding model architectures that favor few-shot learning may be necessary. Second, in more complex domains there may be a greater need for providing abstracted concept-based explanations to end-users. S2E currently includes temporal grouping and information filtering, which we observed to be important for improved user task performance in Lunar Lander. We believe such abstraction methods will continue to be important in other complex domains.
Overall, our evaluations in Lunar Lander and Connect 4 demonstrate the efficacy of the S2E framework. We believe its modular design (a joint embedding model learning decoupled from an explanation abstraction method) allows for implementing and validating extensions to both components important in providing a dual-benefit to agent and end-user.
**Q1:**
As mentioned in response to the weakness above, we expect our S2E framework to be applicable in more complex tasks and domains. We expect some modifications needed to the joint-embedding model architecture to provide accurate retrievals of concepts to state-action pairs. If the joint-embedding model can retrieve concepts with high Recall@1, then based on our current findings, S2E can provide accurate concept-based explanations that both benefit end-user understanding at deployment as well as improve agent learning during training time.
**W4:**
We will add the following details to Section 5.1. In our work, each state is associated with its relevant concepts through objective, mathematical representations of each concept (e.g., a function for position over time, a rule for the existence of a three-in-a-row, etc.). Therefore, we verify that concepts are accurately paired by ensuring the mathematical rule for a concept to be paired with a state-action pair is met.
In the appendix we will also provide a more detailed guideline (with examples) on how we derive and verify concepts mathematically, and discuss how to derive and verify concepts that are collected via human annotation or commentary.
**Q2:**
In our work, we leverage prior expert domain knowledge in Lunar Lander and Connect 4 to derive concepts important to the games via mathematical rules. In many real-world applications, it may be infeasible to leverage mathematical rules for concept derivation. Instead, concepts can be collected via crowdsourcing, such as in [20 from paper], or obtained via “think-aloud” procedures mentioned in [14 from paper]. Note, when concepts are collected via domain expert labeling, such labels should be verified via inter-rater reliability tests to ensure consistency among concept representations. These details will be added to the Appendix section.
**W5 & Q3: “...What kind of wrong explanations... How often?...”**
Figures 3a and 3b show the accuracy of our trained joint embedding models in retrieving concept-based explanations within Lunar Lander and Connect 4. Specifically, Fig. 3b provides a breakdown of the incorrect concept-based explanation retrievals in Connect 4 on the joint embedding model’s test set. The learned joint embedding model for Lunar Lander has a near-perfect Recall@1 (99.9%). In the Appendix, we will include example scenarios where the retrieved concept-based explanations are incorrect.
**Q3: “...How [do wrong explanations] impact training and user performance?...”**
Incorrect retrievals of concept-based explanations for a given state-action pair can negatively impact agent training in S2E by providing incorrect shaping rewards to the agent and, in turn, affecting the learned agent policy. However, our results in Figs. 4a and 4b show that the percentage of incorrect retrievals from our joint embedding models does not significantly impact the RL agent’s learning rate. In Fig. 4a, when studying Connect 4, the reward shaping informed via S2E (MuZero + S2E-RS) results in a slightly lower learning rate than the upper-bound expert-defined reward shaping (MuZero + E-RS). However, MuZero + S2E-RS still improves the agent’s learning rate by ~200 training steps compared to the SoTA baseline agent (MuZero + no-RS). Similarly, in Figs. 4b and 4c, we see that MuZero + S2E-RS performs similarly to the SoTA MuZero + E-RS, given that expert-defined reward shaping exists for Lunar Lander. We will clarify this in Sec. 5.3.
With respect to user performance, concept-based explanations provided to the end-users during the “Explanation” stage are retrieved from our learned joint-embedding models. However, the number of incorrect retrievals was not significantly detrimental to user performance. As shown in Fig. 5, while the ground-truth concept-based explanation condition had slightly greater ATS improvement, there is no significant difference in ATS improvements between the ground-truth concept-based condition and its S2E counterpart. For Lunar Lander, we did not experiment with a ground-truth concept-based condition given that our Lunar Lander joint-embedding model had near-perfect Recall@1 (99.9%). We hypothesize that if we did perform such a comparison, we would see no significant differences given the near-perfect Recall@1. We will clarify this in Sec. 6.1 and 6.2.
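For intuition, the Recall@1 retrieval metric discussed above can be sketched as follows. This is a minimal, hypothetical numpy example; the cosine-similarity retrieval rule and all names are illustrative assumptions, not the paper's actual joint-embedding implementation:

```python
import numpy as np

def recall_at_1(state_embs, concept_embs, true_idx):
    """Fraction of state-action embeddings whose nearest concept
    embedding (by cosine similarity) is the ground-truth concept."""
    s = state_embs / np.linalg.norm(state_embs, axis=1, keepdims=True)
    c = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    pred = (s @ c.T).argmax(axis=1)  # retrieved concept index per state-action pair
    return float((pred == np.asarray(true_idx)).mean())

# Toy check: state embeddings nearly aligned with their own concepts
concepts = np.eye(3)
states = np.eye(3) + 0.01 * np.random.default_rng(0).normal(size=(3, 3))
print(recall_at_1(states, concepts, [0, 1, 2]))  # 1.0
```

A high value of this metric (e.g., the 99.9% reported for Lunar Lander) means the retriever almost always returns the correct concept for a state-action pair.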
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the information in the rebuttal.
I confirm I have read the rebuttals provided by the authors. | Summary: The authors propose a unified framework called State2Explanation (S2E) that combines learning a joint embedding model between state-action pairs and concept-based explanations. The authors draw inspiration from the Protégé Effect, which suggests that explaining knowledge reinforces self-learning. They propose that concept-based explanations can benefit both the RL agent and the end-user by improving the agent's learning rate and the end-user's understanding of the agent's decision making. The S2E framework is designed to inform reward shaping during an agent's training and provide explanations to end-users at deployment for improved task performance. Results on Connect 4 and Lunar Lander demonstrate the success of S2E in providing a dual benefit.
Strengths: The paper introduces a novel framework, S2E. The authors suggest that explaining knowledge to the agent can improve its learning rate, while providing explanations to end-users can enhance their understanding of the agent's decision making. The framework is intended to provide understandable explanations across various applications.
Weaknesses: There are no significant weaknesses from my point of view, though some improvements can be made. Although the paper presents experimental validations in the Connect 4 and Lunar Lander domains, the proposed method should be tested on more complex tasks to assess its effectiveness. The authors should also consider how concepts can be better selected beyond expert-defined thresholding.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - To be specific, how are concept candidates selected when performing InF?
- How are thresholds determined via the qualitative methods? Are there any results showing that different concept selections influence the final performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer smxt, we thank you for your valuable feedback. Below we provide responses to weaknesses and questions:
**W: “..the proposed method should be tested on more complex tasks to test its effectiveness. The authors should also consider how concepts can be better selected beyond expert-defined thresholding.”**
We agree that more investigation of the applicability of S2E should be performed in other complex domains. However, we believe insights from Connect 4 and Lunar Lander are themselves valuable, as both domains are challenging and complex in their own ways. We chose Connect 4 given its large state space (~4 trillion unique states), and Lunar Lander for its complex continuous state space with actions sampled at high frequency, properties that make both domains challenging in RL as well as for providing explanations.
We believe S2E can be applicable in other settings, such as domains with more complex trajectory optimization (e.g., robotics) or complex concept representations (e.g., Go, Chess), with small modifications made to the joint embedding architecture to improve concept-explanation to state-action pair mappings. We agree that future work should investigate other automated methods for concept filtering beyond expert-defined thresholding. We will add this to our limitations section.
Overall, we would like to highlight that, to our knowledge, this is the first work that explores the notion of a dual benefit of concept-based explanations to both the agent and the end-user. We therefore believe that our proposed S2E framework, and its success in providing a dual benefit in Lunar Lander and Connect 4, is an important contribution toward understanding how concept-based explanations can be utilized to provide a dual benefit in other RL domains.
**Q1:To be specific, how concept candidates are selected when performing InF?**
In InF, the thresholds are defined through a qualitative analysis of the agent’s state values when rolling out the agent’s policy. Note, the state values that are analyzed are directly mapped to a concept(s). For example, in Lunar Lander we observe the agent’s “x-position” and “tilt” state values which correspond to concepts “position” and “tilt”. These details are provided in Appendix A.
For more details, we will add the following details to Appendix A: The derived thresholds signify the positive and negative turning points in the agent’s ability to reach G. In other words, these thresholds are expert-defined upper and lower bounds on the agent’s state values that denote the agent’s ability to succeed or fail in its goal. In our InF method, these thresholds are not mathematically derived, but are derived from RL-expert analysis. Specifically, an RL expert visualizes multiple policy rollouts while analyzing the different state values over time to manually determine upper and lower bounds (turning points) that influence the agent’s ability to reach G. In our discussion section we will include that future work involves automating the InF method.
**Q2:How thresholds are determined via qualitative methods? Is there any results showing that different concepts selected will influence the final performance?**
As mentioned in response to Q1, the thresholds are manually derived from RL-expert analysis. Specifically, an RL expert visualizes multiple policy rollouts while analyzing the different state values over time to determine approximate upper and lower bounds (turning points) that influence the agent’s ability to reach G. It is true that different thresholds can result in different abstracted, concept-based explanations.
In response to the rebuttal, we performed an additional study to analyze the sensitivity of our chosen thresholds for the Lunar Lander domain. Please see the PDF in the global review for the graphs; we will add these graphs to Appendix A as additional analyses. All graphs show what fraction of concepts are filtered (y-axis) as the threshold values change (x-axis). When looking at the first graph, “X-position: Threshold Experimentation”, we see that our chosen value is within the elbow of the curve, denoting that the rate of filtration of the “position” concepts slows down after 0.15. Similarly, in the “tilt” experimentation graphs (2nd and 3rd graphs), we see that the lower and upper bounds of the tilt thresholds are also within the “elbow” of each curve. Note, in the upper-bound threshold experimentation, the lower-bound value is fixed, and vice versa. To study the impact of these threshold values, additional user studies would need to be conducted on the utility of the different abstracted concept-based explanations that result from varying the thresholds. However, we consider such analyses beyond the scope of our work, and an avenue to explore further when improving the information filtering submodule within our S2E framework.
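For intuition only, a threshold sweep of the kind described above can be sketched as measuring the fraction of rollout state values falling outside candidate bounds. Everything below (the names, the stand-in data, and the direction of filtering) is an illustrative assumption, not the study's actual procedure:

```python
import numpy as np

def fraction_filtered(state_values, lower, upper):
    """Fraction of state values falling outside [lower, upper]: a stand-in
    for how many concept instances an InF-style threshold would filter."""
    v = np.asarray(state_values, dtype=float)
    return float(((v < lower) | (v > upper)).mean())

# Hypothetical sweep over candidate x-position bounds, looking for the "elbow"
xs = np.random.default_rng(1).normal(0.0, 0.2, size=1000)  # stand-in rollout values
for t in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"threshold +/-{t:.2f}: filtered fraction {fraction_filtered(xs, -t, t):.2f}")
```

Plotting the filtered fraction against the threshold and picking a value near the elbow of the curve mirrors the sensitivity analysis described in the response.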
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks to the authors for the information in the rebuttal. I have read the authors' response and fellow reviewers' feedback. I believe that the authors have addressed most of my concerns.
Rebuttal: We thank all our reviewers for their detailed comments. Firstly, we are encouraged that reviewers saw the importance of our framework S2E in providing a dual benefit to the end-user as well as the RL agent. Reviewer ppHL found our work provided “sound theoretical and empirical analysis”, and reviewer g9Rv saw strength in our “concrete desiderata for a concept”. Reviewer smxt believed in the novelty of our work, and reviewer GSRV agreed with the important need for a framework like S2E, and loved the “idea of the two [agent and user] playing off each other”. We are also glad that reviewers saw value in our analyses, in particular that our user studies demonstrated “real-world viability” (g9Rv), were “methodologically sound” (ppHL), and that their “rigor” was appreciated (GSRV).
We found all feedback to be constructive and informative. We respond to each reviewer’s questions and weaknesses below. We believe that our responses should clarify and address reviewer concerns. If additional details or clarifications are needed, we are happy to provide them.
**Reviewer smxt**: please see attached graphs that support our response to your Q2.
Pdf: /pdf/0c88b8a926b40fb26d12bdac958809511428a92a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Boosting Spectral Clustering on Incomplete Data via Kernel Correction and Affinity Learning | Accept (poster) | Summary: This paper proposes an imputation-free framework with two novel approaches to improve spectral clustering on incomplete data. Firstly, the authors introduce a new kernel correction method that enhances the quality of the kernel matrix estimated on incomplete data with a theoretical guarantee, benefiting classical spectral clustering on pre-defined kernels. Secondly, they develop a new affinity learning method that equips the self-expressive framework with ℓp-norm to construct an intrinsic affinity matrix with adaptive extensions.
Strengths: The originality of this paper is satisfying: it proposes an imputation-free framework with two novel approaches to improve spectral clustering on incomplete data, and its significance is adequate.
The quality and clarity of this paper are satisfying based on the clear presentation of the imputation-free framework with two novel approaches.
Weaknesses: 1. The authors propose an imputation-free framework with two novel approaches to improve spectral clustering on incomplete data. However, the advantages of the proposed method in dealing with incomplete data are not clearly stated based on Section 3, i.e., considering how to recover the missing data. Thus, the authors are expected to analyze the merits of the proposed method in dealing with the incomplete data.
2. In Section 3.1, the authors give different methods that provide a calibrated distance matrix, which benefit distance-based kernels but are not a universal solution for dealing with incomplete data in spectral clustering tasks. However, these separate methods may make the novelty of this paper seem scattered and unfocused. I think the authors could better present the revisited distance calibration methods in this part.
3. The datasets adopted in the experiments are mostly of small scale. The authors could add one or more large-scale multi-view datasets to validate the clustering performance.
4. The improvements of the proposed method compared with other methods are not significant in the experiments, e.g., the RE_K of KC is just 0.217 on USPS.
5. The authors could add one or more recent methods for comparison in the experiments, which would make the comparison of experimental results more comprehensive.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the five points listed under Weaknesses above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please see the five points listed under Weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer hJi1**
Thank you very much for your positive feedback on the originality of our proposed method and its significance. We are delighted to hear that you found *our work satisfying in terms of quality and clarity*. Your comments are greatly appreciated and will help us further improve our work, especially for the presentation and evaluation.
---
**Comment 1**: The advantages of the proposed method in dealing with incomplete data are not clearly stated based on Section 3, i.e., considering how to recover the missing data. Thus, the authors are expected to analyze the merits of the proposed method in dealing with the incomplete data.
**Response 1**: Thanks for the suggestion. We do agree that including a discussion on the merits and challenges between imputation methods and imputation-free methods would be beneficial to the readers. We will try to add such a discussion.
---
**Comment 2**: I think the authors can better stating the revisiting distance calibration methods in this part.
**Response 2**: We appreciate your valuable feedback. It will be revised accordingly.
---
**Comment 3**: The authors can add one or more multi-view datasets with large scales for validating the clustering performance.
**Response 3**: Thanks for your suggestion. We will evaluate the work on larger datasets, including multi-view datasets. (Due to limited time, we were unable to finish such an evaluation during the rebuttal stage.) If you can recommend any suitable multi-view datasets, we would appreciate it.
---
**Comment 4**: The improvements of the proposed method compared with other methods are not significant in the experiments, i.e., the RE_K of KC is just 0.217 on USPS.
**Response 4**: We agree that the improvements under some settings are not that significant; however, the improvements are consistent across all settings.
---
**Comment 5**: The authors can add one or more recent methods for comparison in the experiment, which make the comparson of experimental results more comprehensively.
**Response 5**: As you suggested, we included an additional comparison method, Polynomial Matrix Completion (PMC) [1], in our experiments. The results presented in Table 1 demonstrate that our KC method consistently outperforms the PMC method. Specifically, the KC method achieves better distance and kernel estimation with smaller relative errors (RE_D and RE_K), resulting in improved clustering performance with higher ACC, NMI, and PUR scores. These findings further validate the effectiveness of our KC method in comparison to recent imputation techniques.
**Table 1**: Comparison of PMC and KC on distance estimation, kernel estimation and standard spectral clustering (SC) with Gaussian kernels for incomplete datasets under a missing ratio of 80%.
| Dataset-Method | Yale64-PMC | Yale64-KC | Umist-PMC | Umist-KC | USPS-PMC | USPS-KC | Mfeat-PMC | Mfeat-KC |
|-|-|-|-|-|-|-|-|-|
|RE_D $\downarrow$|0.147|**0.053**|0.325|**0.070**|0.314|**0.132**|0.368|**0.095**|
|RE_K $\downarrow$|0.152|**0.089**|0.457|**0.117**|0.304|**0.217**|0.360|**0.168**|
|SC-ACC $\uparrow$|0.548|**0.578**|0.397|**0.463**|0.476|**0.523**|0.508|**0.761**|
|SC-NMI $\uparrow$|0.574|**0.596**|0.508|**0.673**|0.466|**0.472**|0.469|**0.758**|
|SC-PUR $\uparrow$|0.555|**0.584**|0.473|**0.553**|0.556|**0.609**|0.526|**0.804**|
[1] Fan, J., et al. "Polynomial matrix completion for missing data imputation and transductive learning." AAAI, 2020.
---
Rebuttal Comment 1.1:
Comment: I appreciate the replies from the authors and keep my rating after reading the replies.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Reply and Welcome Further Discussion
Comment: Thank you for taking the time to read the replies and we value your feedback. If you have any further questions or concerns, please feel free to let us know. | Summary: This paper studies the spectral clustering problem when there is missing data. The paper proposes a new algorithm for correcting the computed kernel matrix by projecting the matrix to the nearest symmetric PSD matrix, and using this corrected kernel for clustering. The paper also combines the new kernel correction algorithm with affinity learning to develop a new technique for learning affinity matrices for incomplete data.
Experimental evaluations show that the newly developed techniques outperform alternative methods for handing incomplete data with respect to standard clustering metrics.
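For readers unfamiliar with the projection idea summarized above, a generic alternating-projection (Dykstra-style) sketch is shown below. It assumes a Gaussian kernel whose entries lie in [0, 1]; this is an illustration of the general technique, not the paper's exact constraint set or implementation:

```python
import numpy as np

def project_psd(K):
    """Projection onto the PSD cone: clip negative eigenvalues to zero."""
    w, V = np.linalg.eigh((K + K.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def project_box(K, lo=0.0, hi=1.0):
    """Projection onto elementwise bounds (a Gaussian kernel lies in [0, 1])."""
    return np.clip(K, lo, hi)

def dykstra_kernel_correction(K0, n_iter=100):
    """Dykstra's alternating projections onto {PSD} intersected with
    {0 <= K_ij <= 1}; converges to the nearest point in the intersection
    in Frobenius norm."""
    K, p, q = K0.copy(), np.zeros_like(K0), np.zeros_like(K0)
    for _ in range(n_iter):
        Y = project_psd(K + p)   # project with correction term p
        p = K + p - Y
        K = project_box(Y + q)   # project with correction term q
        q = Y + q - K
    return K

K0 = np.array([[1.0, 0.95, 0.10],
               [0.95, 1.0, 0.95],
               [0.10, 0.95, 1.0]])        # indefinite "kernel" (min eig ~ -0.29)
K = dykstra_kernel_correction(K0)
print(np.linalg.eigvalsh(K).min())        # close to 0: corrected kernel is (near-)PSD
```

Unlike plain alternating projections, Dykstra's correction terms ensure convergence to the actual Frobenius-nearest point of the intersection, which matches the optimization problem described in the summary.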
Strengths: The newly proposed algorithm for kernel correction is novel and interesting. It has potential applications and could inspire future research directions. The algorithm is conceptually quite simple and captures the theoretical intuition quite naturally. The experimental results suggest that the new algorithm outperforms alternative methods.
Weaknesses: The running time of the kernel correction algorithm is quite slow - it requires computing the spectral decomposition of the kernel matrix at every iteration. It is good that the authors discuss this limitation in the paper, although it is a weakness of the algorithm. Improving the running time could be a future research direction.
It would be interesting to compare experimentally the benefit that kernel correction brings over simply doing nothing to correct the missing data (or doing something very naive to construct the k-NN graph for example).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Every iteration of the KC algorithm requires computing the spectral decomposition. Given that only the top k eigenvectors are needed for spectral clustering, could it be possible to speed up the algorithm by, for example, computing only k eigenvectors at each iteration?
What is the performance experimentally of performing spectral clustering on the naive k-NN graph constructed from the incomplete data? Given that this will be much faster than running KC, it would be interesting to see the trade-off between running time and accuracy.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer GE1q**
Thanks for your positive feedback on the novelty of our proposed method and its potential applications for future research directions. Your comments are greatly appreciated and will help us further improve our work, especially for convincing empirical evaluation.
---
**Question 1**: Given that only the top k eigenvectors are needed for spectral clustering, could it be possible to speed up the algorithm by, for example, computing only k eigenvectors at each iteration?
**Answer 1**: We appreciate your insightful suggestion. To enhance the efficiency of the KC algorithm, we explored the use of randomized singular value decomposition (rSVD) [1], a method that identifies the top-$k$ singular values (refer to Section 3.3). By replacing the spectral decomposition (SD) with rSVD, we were able to significantly reduce the time complexity of the algorithm from $O(n^3)$ to $O(n^2 \cdot \log(k) + 2n \cdot k^2)$ while ensuring accurate decomposition.
We compared the performance of the KC method with SD or rSVD on two large datasets, USPS-1K and Mfeat-2K. The results, presented in Tables 1(a) and 1(b) respectively, reveal that rSVD greatly enhances the operational efficiency of the KC algorithm. However, it is crucial to choose an appropriate value for $k$ (the number of top singular values), as it heavily impacts the quality of clustering. Smaller $k$ values (e.g., 10 or 20) result in poorer clustering due to the loss of important singular-value information. On the other hand, selecting a suitable $k$ value (e.g., 50 or 100) significantly reduces runtime while maintaining clustering performance comparable to SD, particularly for the standard spectral clustering algorithm (SC). Balancing this trade-off between efficiency and performance remains an intriguing avenue for future research.
**Table 1(a)**: Comparison of rSVD and SD on kernel correction and spectral clustering for the incomplete *USPS-1K* dataset under a missing ratio of 80%. Note that *RE_K* denotes the relative error of the corrected Gaussian kernel, *SC* denotes the standard spectral clustering algorithm, and *KLSR* denotes the kernel least-squares representation algorithm.
|Method|rSVD (k=10)|rSVD (k=20)|rSVD (k=50)|rSVD (k=100)|SD|
|-|-|-|-|-|-|
|RE_K|0.695|0.419|0.262|0.291|**0.217**|
|SC-ACC|0.153|0.290|0.494|0.509|**0.523**|
|SC-NMI|0.026|0.164|0.431|0.439|**0.472**|
|SC-PUR|0.189|0.332|0.572|0.582|**0.609**|
|KLSR-ACC|0.147|0.266|0.317|0.148|**0.528**|
|KLSR-NMI|0.021|0.122|0.210|0.018|**0.485**|
|KLSR-PUR|0.173|0.296|0.389|0.176|**0.627**|
|*Time (sec)*|**1.96**|2.12|3.10|5.76|72.97|
**Table 1(b)**: Comparison of rSVD and SD on kernel correction and spectral clustering for the incomplete *Mfeat-2K* dataset under a missing ratio of 80%.
|Method|rSVD (k=10)|rSVD (k=20)|rSVD (k=50)|rSVD (k=100)|SD|
|-|-|-|-|-|-|
|RE_K|0.681|0.474|0.223|0.201|**0.168**|
|SC-ACC|0.142|0.201|0.715|0.721|**0.761**|
|SC-NMI|0.017|0.071|0.667|0.733|**0.758**|
|SC-PUR|0.148|0.213|0.748|0.777|**0.804**|
|KLSR-ACC|0.136|0.178|0.629|0.606|**0.705**|
|KLSR-NMI|0.012|0.045|0.563|0.613|**0.712**|
|KLSR-PUR|0.140|0.184|0.675|0.686|**0.755**|
|*Time (sec)*|**6.82**|7.28|9.08|13.98|390.84|
[1] Halko N., et al. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM Review, 2011.
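For concreteness, a generic numpy sketch of a Halko-style randomized SVD is given below. This is our illustrative version of the technique in [1], not the implementation used in the experiments above:

```python
import numpy as np

def randomized_svd(A, k, p=10, n_iter=4, seed=0):
    """Halko-style rSVD: sketch the range of A with a random test matrix,
    run a few power iterations to sharpen the basis, then decompose the
    small projected matrix."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A @ rng.normal(size=(A.shape[1], k + p)))  # range sketch
    for _ in range(n_iter):                        # power iterations
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                                    # small (k+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# On an exactly rank-20 matrix, the top-20 rSVD reconstruction is near-exact
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 20)) @ rng.normal(size=(20, 120))
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # tiny relative error
```

The cost is dominated by a handful of matrix products with a thin $(k+p)$-column matrix, which is where the speedup over a full spectral decomposition comes from.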
---
**Question 2**: What is the performance experimentally of performing spectral clustering on the naive k-NN graph constructed from the incomplete data?
**Answer 2**: As you suggested, we have compared the naive kNN graph and our KC method on four benchmark datasets. The results, presented in Table 2, show that the KC method surpasses the naive kNN graph across multiple aspects.
- The KC method demonstrates improved accuracy in distance estimation and kernel estimation, as evidenced by smaller relative errors, i.e., RE_D and RE_K.
- KC exhibits a better local relationship in the kNN graph, as indicated by a higher Recall value.
- When applying SC, KSSC, and KLSR clustering algorithms, the KC method consistently outperforms the naive kNN graph in terms of ACC, NMI, and PUR metrics.
Notably, while the naive kNN graph boasts a significantly faster runtime, its clustering performance is unstable and can be notably compromised when utilizing the KSSC or KLSR algorithms due to substantial errors in kernel estimation.
**Table 2**: Comparison of the naive kNN graph and KC method on spectral clustering with standard Gaussian kernels (SC) and self-expressive affinity learning (KSSC, KLSR) for incomplete datasets under a missing ratio of 80%.
| Dataset-Method | Yale64-Naive | Yale64-KC | Umist-Naive | Umist-KC | USPS-Naive | USPS-KC | Mfeat-Naive | Mfeat-KC |
|-|-|-|-|-|-|-|-|-|
| RE_D | 0.064 | **0.053** | 0.107 | **0.070** | 0.268 | **0.132** | 0.173 | **0.095** |
| RE_K | 0.113 | **0.089** | 0.189 | **0.117** | 0.460 | **0.217** | 0.312 | **0.168** |
| Recall | 0.721 | **0.767** | 0.726 | **0.785** | 0.071 | **0.197** | 0.246 | **0.286** |
| SC-ACC | 0.561 | **0.578** | 0.462 | **0.463** | 0.343 | **0.523** | 0.728 | **0.761** |
| SC-NMI | 0.588 | **0.596** | 0.669 | **0.673** | 0.222 | **0.472** | 0.740 | **0.758** |
| SC-PUR | 0.572 | **0.584** | 0.549 | **0.553** | 0.395 | **0.609** | 0.782 | **0.804** |
| KSSC-ACC | 0.190 | **0.586** | 0.119 | **0.496** | 0.152 | **0.427** | 0.121 | **0.713** |
| KSSC-NMI | 0.219 | **0.616** | 0.101 | **0.714** | 0.018 | **0.360** | 0.010 | **0.648** |
| KSSC-PUR | 0.198 | **0.601** | 0.131 | **0.584** | 0.171 | **0.529** | 0.124 | **0.750** |
| KLSR-ACC| 0.582 | **0.607** | 0.485 | **0.488** | 0.144 | **0.528** | 0.205 | **0.705** |
| KLSR-NMI| 0.606 | **0.616** | 0.676 | **0.696** | 0.019 | **0.485** | 0.078 | **0.712** |
| KLSR-PUR| 0.592 | **0.613** | 0.568 | **0.572** | 0.176 | **0.627** | 0.217 | **0.755** |
| *Time (sec)*| 0.18 | **0.03** | **0.54** | 8.05 | **0.60** | 72.97 | **3.76** | 390.84 |
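As an illustration of what a "naive kNN graph" baseline on incomplete data can look like, here is a hedged numpy sketch. The co-observed-feature distance with rescaling is a common heuristic and an assumption on our part, not necessarily the exact baseline evaluated in Table 2:

```python
import numpy as np

def masked_sq_distances(X, mask):
    """Naive pairwise squared distances using only co-observed features,
    rescaled by the fraction of observed coordinates (a common heuristic)."""
    n, d = X.shape
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            both = mask[i] & mask[j]
            if both.any():
                diff = X[i, both] - X[j, both]
                D[i, j] = D[j, i] = (diff @ diff) * d / both.sum()
    return D

def knn_affinity(D, k):
    """Symmetric 0/1 kNN graph from a distance matrix."""
    n = D.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]   # skip self at position 0
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)

# Toy check: two well-separated pairs of points, fully observed
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
mask = np.ones_like(X, dtype=bool)
A = knn_affinity(masked_sq_distances(X, mask), k=1)
print(A)   # connects {0,1} and {2,3}, no cross-cluster edges
```

This graph can then be fed to any spectral clustering routine; the comparison above suggests such a graph is fast to build but brittle when the missing ratio is high.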
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I am pleased to see the additional experimental results based on computing fewer singular vectors, and a comparison with the naive algorithm. The trade-off between running time and performance in both experimental results is interesting and I feel is worth including in the next version of the paper.
Based on this response, I am happy to increase my rating to 7.
---
Reply to Comment 1.1.1:
Title: Many Thanks to Your Great Support
Comment: Thank you for your valuable comments. We will include these additional results in the paper revision, as well as explore the trade-off between efficiency and performance in future research. Your feedback has been instrumental in enhancing the quality of our work, and we are grateful for your support. | Summary: The authors introduce an imputation-free framework for correcting a kernel obtained from incomplete data. They propose the corrected kernel to be a PSD matrix with bounded elements that is closest to the initial kernel (calculated from incomplete data) in Frobenius norm. They show that the corrected kernel is guaranteed to be closer to the ground truth than the initial kernel. In the case of the Gaussian kernel, Dykstra's projection algorithm is presented to obtain the corrected kernel. They then extend the existing self-expressive affinity learning framework by incorporating the proximal p-norm (0 < p < 1) and the Schatten p-norm (½ < p < 1) penalties on the affinity matrix. An algorithm (KSL-Pp) based on augmented Lagrangian and ADMM is proposed for the proximal p-norm penalty. Another algorithm (KSL-Sp) based on gradient descent is proposed for the Schatten p-norm penalty. Finally, the authors combine the kernel correction and the self-expressive affinity learning frameworks to jointly learn the corrected kernel and the affinity matrix. An algorithm (AKLSR) based on augmented Lagrangian and ADDM is proposed to solve the joint optimization problem.
For numerical experiments, they perform a comparative analysis on synthetic and image datasets and show that their proposed method produces a more accurate Gaussian kernel than existing techniques. The performance of spectral clustering improves when their corrected kernel is used instead of the corrected kernels obtained by competing techniques. For self-expressive affinity learning, they show that a more accurate affinity can be estimated when their corrected kernel is used instead of those obtained by other techniques.
Strengths: Significance: The Gaussian kernel plays a central role in several tasks, including clustering, dimensionality reduction, and graph neural networks. A comparative analysis on image datasets is provided that shows improved quality of the corrected kernel using the proposed approach against those obtained by existing techniques. The same is also reflected in the improved performance of spectral clustering and the improved quality of self-expressive affinity matrices based on their proposed corrected kernel. On a high level, these improvements could be helpful in downstream applications.
Quality: The authors adapted Dykstra's projection algorithm, which comes with linear convergence guarantees, to correct the Gaussian kernel obtained from incomplete data.
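For context, the correction step described above can be sketched as Dykstra's alternating projections between the PSD cone and a set of entrywise-bounded matrices. The two constraint sets below (unit diagonal, entries in [0, 1], as for a Gaussian kernel) are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def proj_box(A):
    """Clip entries to [0, 1] and force a unit diagonal, as for a Gaussian kernel."""
    B = np.clip(A, 0.0, 1.0)
    np.fill_diagonal(B, 1.0)
    return B

def dykstra_correct(K0, n_iter=200):
    """Dykstra's algorithm: unlike plain alternating projection, the correction
    terms P, Q make the iterates converge to the NEAREST point (Frobenius norm)
    of the intersection of the two convex sets."""
    X = K0.copy()
    P = np.zeros_like(K0)  # correction term for the PSD projection
    Q = np.zeros_like(K0)  # correction term for the box projection
    for _ in range(n_iter):
        Y = proj_psd(X + P)
        P = X + P - Y
        X = proj_box(Y + Q)
        Q = Y + Q - X
    return X

K0 = np.array([[1.0, 0.9, 0.1],
               [0.9, 1.0, 0.9],
               [0.1, 0.9, 1.0]])  # symmetric, unit diagonal, but not PSD
K_hat = dykstra_correct(K0)
```

The output satisfies the box constraints exactly (the last projection applied) and the PSD constraint up to the convergence tolerance.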
Weaknesses: Quality / clarity:
The authors extended the existing self-expressive affinity learning framework by incorporating p-norm based penalties (0 < p < 1). However, it is not clear how p-norm based penalties are more effective than the conventional 1-norm, Frobenius norm and nuclear norm based penalties. The provided empirical analysis seems insufficient to reach any conclusion.
The authors also proposed an algorithm that jointly optimizes the corrected kernel and the self-expressive affinity matrices using ADMM. However, there seems to be an issue in the formulation (see questions), and no empirical analysis is provided. This gives the sense that the paper is incomplete: the authors propose Adaptive Kernel Self-expressive Learning in Section 4.2 to tie together the kernel correction of Section 3 and the self-expressive affinity learning of Section 4.1, yet include no simulations for this approach. Without a demonstration of its efficacy, the inclusion of the proposed AKLSR algorithm seems unnecessary, and the paper then amounts to two unrelated approaches (kernel correction and self-expressive learning) grouped together.
Missing references:
* Gilbert, A. C., & Jain, L. (2017, October). If it ain't broke, don't fix it: Sparse metric repair. In 2017 55th Annual Allerton Conference on Communication, Control, and Computing
* Shahid, N., Kalofolias, V., Bresson, X., Bronstein, M., & Vandergheynst, P. (2015). Robust principal component analysis on graphs. In Proceedings of the IEEE International Conference on Computer Vision
* Biswas, Arijit, and David W. Jacobs. "An Efficient Algorithm for Learning Distances that Obey the Triangle Inequality." BMVC. 2015.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Major comments:
* Line 42. “Incomplete data” - the authors never define what incomplete setting they are solving; I assume missing at random, but this should be clarified.
* Line 128 “With a small missing ratio of features, $D_0$ already satisfies most triangle inequalities,..., then the algorithm typically yields only marginal improvement” - wouldn’t this be true of all methods? If the ratio of missing features is low, the impact on kernel construction is small and fixing it should yield marginal improvement.
* line 140, an explanation of “the quality of $\hat{D}$ cannot be guaranteed” would be helpful.
* Line 221 “the corrected distance obtained from the corrected Gaussian kernel will also be more accurate than the calibrated distance from the Euclidean embedding method.” is there a proof for this or just empirical evidence in the simulation?
* Why do the elements of \hat{C} on lines 11 and 10 of Algorithms 2 and 3 lie in [0,1]?
* The constraints on K in Eq. (9), seem incorrect. Why are they not consistent with those in Eq.(5)? Why is there no constraint that K is PSD?
* Line 201: O(n^2) storage limits the size of data that can be analyzed. Are the estimated kernels dense or sparse?
* Is the inverse of the formula for k_{ij} on line 260 used to obtain d_{ij}?
* The future work on line 287 “with future work on the potential of deep learning clustering techniques” should be made a bit more clear.
Simulations: My main concern is with the experiments.
1. The asymptotic time complexity provided by the authors is appreciated. It would be helpful if the actual time taken (in seconds) by the kernel correction and the affinity learning algorithms are also reported.
2. Numerical experiments for KLRR and KSL-Pp in Table 3 are missing.
3. Numerical experiments for KLRR, KSL-Sp and KSL-Pp in Table 3 and 4 in the supplementary material are missing.
4. Numerical experiments for the proposed AKLSR algorithm that jointly corrects the kernel and learns the affinity matrix are missing.
5. Gains are modest compared to EE - how do the two compare in terms of runtime/storage?
6. A popular measure to evaluate clustering is the adjusted Rand index (ARI). The authors should add this since it is more informative than ACC.
7. An additional non-image dataset would be useful to add if there is time (word-document, recommendation systems).
8. std values are missing from all the tables.
9. The experiments are performed only for 80% missing values. A wider range of missingness should be explored (a plot can be used instead of tables) to demonstrate how robust the method is.
10. How are the hyperparameters of competing methods determined?
Minor comments:
* Although it seems intuitive, a reference for line 22 “incomplete data is commonly seen in practice, leading to inaccurate affinities and degraded clustering performance” would be helpful.
* A minus is missing in the formula for K_{ij} on line 87.
* The presentation in section 3 of “first method”, “second method”, ... seems odd. Are these the only relevant algorithms? Also, instead of using “first method” as a paragraph title, it would be better to use the name of the algorithm for the title of the paragraphs starting on lines 122, 130 and 142.
* A more precise reference (theorem number etc.) of Property 2 on line 146 would be helpful.
* Theorem 1 on line 193 may be placed after equation 5 as it holds in the general setting.
* The transition to section 4 is rather abrupt and 1-2 introductory sentences can help better tie the paper together.
* Which algorithm (SVD or rSVD) in Section 3.3 is used in the numerical experiments?
* What value of p for KSL-Sp used in Section 5.3?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: limitations aren't discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer 7MjD**
Thanks for your valuable comments on our work. We greatly appreciate the time and effort you have dedicated to thoroughly evaluating our paper and providing detailed feedback. We will modify it accordingly.
---
**Comment 1**: It is not clear how p-norm based penalties are more effective than the conventional 1-norm, Frobenius norm and nuclear norm based penalties.
**Answer 1**: Theoretically, the p-norm is a generalization of the L1 norm, offering flexible sparsity control across a range of sparsity levels. Moreover, p-norm penalties exhibit robustness to outliers due to their smooth and continuous penalty term. Additional comparison results for different norms will be included in the revision to further validate these advantageous properties.
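The claimed flexibility of p-norm penalties can be illustrated with the scalar proximal operator of $\lambda|x|^p$. The brute-force minimization below is only an illustrative sketch (the value $\lambda = 0.5$ and the grid resolution are arbitrary choices, not from the paper): for $p = 1$ it reproduces soft-thresholding, while for $p = 0.5$ small inputs are still zeroed but large inputs are shrunk far less.

```python
import numpy as np

def prox_lp(z, lam, p):
    """Scalar proximal operator of lam*|x|**p, found by brute-force grid search:
    argmin_x 0.5*(x - z)**2 + lam*|x|**p  (no closed form for general p)."""
    grid = np.linspace(-abs(z) - 1.0, abs(z) + 1.0, 200001)
    obj = 0.5 * (grid - z) ** 2 + lam * np.abs(grid) ** p
    return grid[np.argmin(obj)]

# p = 1 is soft-thresholding: small inputs zeroed, large ones shifted by lam.
# p = 0.5 also zeroes small inputs but leaves large inputs almost unbiased.
small_p1, large_p1 = prox_lp(0.3, 0.5, 1.0), prox_lp(3.0, 0.5, 1.0)
small_ph, large_ph = prox_lp(0.3, 0.5, 0.5), prox_lp(3.0, 0.5, 0.5)
```

Here `large_p1` is shifted down by the full $\lambda$ (to 2.5), whereas `large_ph` stays close to the input 3.0, which is one way to read "flexible sparsity control with less bias".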
---
**Comment 2**: Missing references.
**Answer 2**: We will cite these important references accordingly.
---
**Comment 3-1**: On the results of KSL-Pp and AKLSR algorithm.
**Answer 3-1**: Sorry for any confusion caused by the missing results. In practice, the KSL-Pp method requires handling numerous hyper-parameters and involves a non-convex optimization process during the Z-update step, making it difficult to use effectively. To address this limitation, we propose two extensions, the KSL-Sp and AKLSR algorithms. We have conducted experiments with AKLSR on two datasets, as shown in Table 1, which partially validates the effectiveness of the KC and AKLSR methods. More results will be included in the revision.
**Table 1(a)**: Performance of the AKLSR algorithm on Yale64 and Umist datasets under a missing ratio of 80%.
|Dataset-Metric|ZERO|MEAN|kNN|EM|SVT|FNNM|GR|KFMC|DC|TRF|EE|KC|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Yale64-NMI|0.332|0.610|0.334|0.602|0.312|0.614|0.341|0.307|0.564|0.385|0.577|**0.617**|
|Yale64-PUR|0.300|0.591|0.312|0.581|0.281|0.602|0.307|0.290|0.546|0.348|0.555|**0.592**|
|Yale64-ARI|0.071|0.377|0.079|0.373|0.057|0.382|0.074|0.051|0.321|0.116|0.329|**0.383**|
|Umist-NMI|0.501|0.632|0.507|0.629|0.503|0.495|0.681|0.466|0.625|0.126|0.642|**0.687**|
|Umist-PUR|0.428|0.519|0.419|0.521|0.425|0.418|0.567|0.424|0.525|0.145|0.540|**0.590**|
|Umist-ARI|0.225|0.347|0.217|0.343|0.219|0.224|0.402|0.167|0.344|0.002|0.345|**0.398**|
**Table 1(b)**: Comparison of AKLSR and SC algorithms on Yale64 and Umist datasets under a missing ratio of 80%.
||Yale64-SC|Yale64-AKLSR|Umist-SC|Umist-AKLSR|
|-|-|-|-|-|
|NMI|0.596|**0.617**|0.673|**0.687**|
|PUR|0.584|**0.592**|0.553|**0.590**|
|ARI|0.366|**0.383**|0.377|**0.398**|
---
**Comment 3-2**: On comparison of EE and KC.
**Answer 3-2**: While the KC method has comparable runtime and storage requirements, it offers the advantage of being applicable to a wide range of kernels. On the other hand, the EE method is limited to Laplacian kernels and relies on stricter assumptions, resulting in a narrower scope of applications.
---
**Comment 3-3**: On the ARI metric, std values, and more missingness.
**Answer 3-3**: Thanks for your suggestion and we will add these results in the revision.
---
**Comment 3-4**: On experiments of a non-image dataset.
**Answer 3-4**: We included an additional speech dataset, Isolet, in the Supplementary and also showed the superiority of our methods.
---
**Comment 4-1**: Line 221. Is there a proof for this or just empirical evidence in the simulation?
**Answer 4-1**: The claim is supported by empirical evidence from our experiments and is an intuitive observation. We will state the claim more rigorously and seek a theoretical proof in future work.
---
**Comment 4-2**: Why do the elements of \hat{C} on line 11 and 10 of algorithms 2 and 3 lie in [0,1]?
**Answer 4-2**: Taking Algorithm 2 as an example, the constraint of $c_{ij} \in [0,1]$ in Eq. (6) has been incorporated through the term $\sum_{i,j} \max(z_{ij}-1,0)^2$ in the augmented Lagrangian function defined in Eq. (8). Consequently, when solving the augmented Lagrangian function using ADMM, the Z-update step largely guarantees the satisfaction of this constraint.
---
**Comment 4-3**: The constraints on K in Eq. (9), seem incorrect. Why are they not consistent with those in Eq.(5)? Why is there no constraint that K is PSD?
**Answer 4-3**: We apologize for any confusion caused by the inconsistency in Eq. (9). To construct a suitable Lagrangian function, we simplified the formulation and utilized a procedure that starts with an initial estimate $K^0$ without explicitly enforcing the PSD constraint. However, we recognize the need for a more rigorous investigation and a solution that incorporates the PSD constraint appropriately. We appreciate your feedback and will make sure to clarify this in our work.
---
**Comment 4-4**: Line 201: O(n^2) storage limits the size of data that can be analyzed. Are the estimated kernels dense or sparse?
**Answer 4-4**: Thanks for your suggestion. In practice, both the initial estimate $K^0$ and the corrected $\hat{K}$ are dense.
---
**Comment 4-5**: Is the inverse of the formula for k_{ij} on line 260 used to obtain d_{ij}?
**Answer 4-5**: Exactly. We obtain $d_{ij} = \sqrt{-\sigma^2 \log(k_{ij})}$ due to $k_{ij} = \exp(-d_{ij}^2/\sigma^2)$, where $\sigma = \text{median}\\{d_{ij}\\}$.
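The round trip stated in this answer can be sanity-checked in a few lines (the distance values below are arbitrary test inputs):

```python
import numpy as np

# Round-trip between Gaussian kernel values and distances, as in the rebuttal:
# k_ij = exp(-d_ij**2 / sigma**2)  <=>  d_ij = sqrt(-sigma**2 * log(k_ij))
sigma = 2.0
d = np.array([0.0, 0.5, 1.0, 3.0])
k = np.exp(-d**2 / sigma**2)
d_back = np.sqrt(-sigma**2 * np.log(k))
```

The inversion recovers the original distances exactly (for $k_{ij} \in (0, 1]$, where the logarithm is non-positive).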
---
**Comment 4-6**: Which algorithm (SVD or rSVD) in Section 3.3 is used in the numerical experiments?
**Answer 4-6**: In our experiments, we used the spectral decomposition in the KC method. We will clarify it and include results of extensions with rSVD in the revision.
---
**Comment 4-7**: What value of p for KSL-Sp used in Section 5.3?
**Answer 4-7**: In practice, there is no need to explicitly specify the value of $p$ in the KSL-Sp method. Theoretically, we utilized the finding in previous work that for $\frac{1}{2} < p < 1$, $ ||C||_{S_p} = \min \frac{||U||_F^2 + ||V||_F^2}{2} $ with a constraint of $C = UV^{\top}$ holds true. Based on this, we constructed a relaxed Lagrangian function in Line 235 that does not involve the parameter $p$.
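As displayed, the identity $\|C\|_{S_p} = \min_{C=UV^\top} \frac{\|U\|_F^2 + \|V\|_F^2}{2}$ is the classical variational form of the nuclear norm (the $p = 1$ case); the factorization result the authors invoke for $\frac{1}{2} < p < 1$ is an analogous but different statement. The sketch below numerically verifies only the $p = 1$ case, where the SVD-balanced factorization attains the minimum:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 4))

# Nuclear norm via SVD: sum of singular values.
U_s, s, Vt = np.linalg.svd(C, full_matrices=False)
nuclear = s.sum()

# Balanced factorization C = U V^T with U = U_s*sqrt(S), V = V_s*sqrt(S):
# then ||U||_F^2 = ||V||_F^2 = sum(s), so the objective equals the nuclear norm.
U = U_s * np.sqrt(s)
V = Vt.T * np.sqrt(s)
attained = (np.linalg.norm(U, "fro")**2 + np.linalg.norm(V, "fro")**2) / 2
```

Any other factorization of $C$ can only give a larger objective, which is what makes the parameter-free relaxation in the KSL-Sp Lagrangian possible.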
---
Rebuttal Comment 1.1:
Title: More clarification
Comment: I thank the authors for their replies.
Regarding the results in table 1(a) - AKLSR performs worse than KSL-Sp - what is the advantage then?
Are these averaged over multiple runs?
I encourage the authors to include results over a wider range of missingness values - it is hard to evaluate the performance based on a single value, and it would be good to know how the method performs at low/high missing rates.
---
Reply to Comment 1.1.1:
Title: More Detailed Comparisons
Comment: Thanks for your feedback and suggestions, and for your willingness to participate in further discussions. The results in Table 1 are average values over five runs. We will include these results in the revision.
---
**Comparison of KSL-Sp and AKLSR:**
When comparing KSL-Sp to AKLSR, we observe that AKLSR exhibits a significantly faster running speed. In Algorithm 4, each update step of AKLSR is expressed explicitly, leading to faster iterative convergence. In contrast, KSL-Sp in Algorithm 3 requires gradient descent to update $U$, resulting in a longer convergence time.
Additionally, AKLSR can be further enhanced by incorporating a PSD constraint, resulting in AKLSR-PSD. Specifically, by introducing a spectral decomposition step after the $K$-update in Algorithm 4, we ensure that the kernel matrix $K$ used for updating the affinity matrix $C$ is PSD. The results presented in Table 2 provide partial validation of the efficiency of AKLSR and the effectiveness of AKLSR-PSD on the Umist dataset.
**Table 2**: Comparison of SC, KSL-Sp, AKLSR, and AKLSR-PSD algorithms on the Umist dataset under a missing ratio 80%. We consider two different kernel matrices as inputs: the true Gaussian kernel matrix (TRUE) and the corrected kernel matrix (KC). We measure the time required to obtain a self-expressive affinity matrix from a given kernel matrix. All results are the average values for five runs.
|Method|SC|KSL-Sp|AKLSR|AKLSR-PSD|
|-|-|-|-|-|
|TRUE-NMI|0.728|**0.778**|0.746|0.753|
|TRUE-PUR|0.602|**0.674**|0.629|0.652|
|TRUE-ARI|0.443|**0.563**|0.477|0.497|
|TRUE-Time (sec)|-|12.574|**0.041**|0.254|
|KC-NMI|0.673|**0.698**|0.687|0.691|
|KC-PUR|0.553|0.592|0.590|**0.596**|
|KC-ARI|0.377|0.395|0.398|**0.403**|
|KC-Time (sec)|-|12.572|**0.039**|0.233|
---
**More Results on Different Missingness:**
As you suggested, we have compared the performance on the Umist dataset under a wider range of missingness. The results, presented in Table 3, highlight the superiority of the KC method over baselines, particularly for scenarios with a high missing ratio.
- In terms of distance estimation and kernel estimation accuracy, the KC method consistently outperforms other methods, as indicated by smaller relative errors (RE_D and RE_K). Moreover, the KC method exhibits a stronger local relationship in the kNN graph, as evidenced by higher Recall values.
- In cases where the missing ratio is small, the KC method achieves comparable performance or incremental improvement compared to imputation methods.
- When dealing with a large missing ratio, imputation methods struggle due to the limited availability of observed data. This limitation increases the estimation error and leads to inaccurate kernel matrices, resulting in a significant degradation of clustering performance. In contrast, the KC method excels in terms of smaller errors and more stable clustering performance.
**Table 3**: Performance of distance estimation, kernel estimation, and standard spectral clustering (SC) on the Umist dataset with a wide range of missing ratios, i.e., {20%, 50%, 80%}. Note that RE_D (RE_K) denotes the relative error of the corrected Euclidean distance (Gaussian kernel). All results are the average values for five runs.
|Method|ZERO|MEAN|kNN|EM|SVT|FNNM|GR|KFMC|DC|TRF|EE|KC|rank|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|RE\_D-20\%|0.087|0.112|0.055|0.138|0.088|0.096|0.055|0.029|0.110|0.016|0.017|**0.016**|1|
|RE\_D-50\%|0.237|0.302|0.222|0.281|0.237|0.254|0.144|0.090|0.372|0.037|0.035|**0.033**|1|
|RE\_D-80\%|0.487|0.561|0.492|0.561|0.487|0.487|0.309|0.400|1.122|0.102|0.071|**0.070**|1|
|RE\_K-20\%|0.116|0.072|0.023|0.108|0.106|0.068|0.043|0.025|0.072|0.028|0.029|**0.028**|2|
|RE\_K-50\%|0.232|0.157|0.115|0.180|0.232|0.180|0.111|0.076|0.162|0.065|0.062|**0.057**|1|
|RE\_K-80\%|0.319|0.234|0.295|0.233|0.319|0.319|0.253|0.369|0.250|0.181|0.125|**0.117**|1|
|Recall-20\%|0.887|0.912|0.952|0.870|0.887|0.921|0.944|0.964|0.932|0.954|0.950|**0.956**|2|
|Recall-50\%|0.558|0.672|0.778|0.630|0.558|0.680|0.856|0.910|0.783|0.899|0.907|**0.914**|1|
|Recall-80\%|0.092|0.171|0.119|0.172|0.092|0.092|0.596|0.248|0.226|0.740|0.771|**0.785**|1|
|SC-ARI-20\%|0.370|0.405|0.451|0.388|0.380|0.408|0.436|0.439|0.427|0.456|0.466|**0.460**|2|
|SC-ARI-50\%|0.258|0.332|0.326|0.325|0.256|0.291|0.398|0.448|0.361|0.434|0.450|**0.451**|1|
|SC-ARI-80\%|0.070|0.206|0.082|0.207|0.067|0.069|0.304|0.140|0.216|0.370|0.371|**0.377**|1| | Summary: The paper proposes a new kernel correction method to address the issue of incomplete data. Existing approaches aim to recover the distance matrix of complete data, starting from that of the incomplete data. In contrast, the proposed method (Section 3.2) formulates the problem as finding a positive semi-definite matrix that is closest to kernel matrix of incomplete data. The optimisation is solved using a iterative approach.
The paper further extends the approach to (kernel) affinity learning problems to propose 3 further algorithms, and finally, experimentally shows that the proposed methods outperform existing distance completion methods in the setting of kernel spectral clustering.
Strengths: - The proposed approach is technically sound, and based on a rather intuitively simple idea of projecting onto the space of positive semi-definite matrices.
- Extensions to affinity learning are proposed
- The experiments show clear improvement over existing approaches for completing distance matrices
Weaknesses: Literature:
The paper focuses only on kernel spectral clustering and affinity learning, and hence misses the broad and older literature on kernel methods for supervised learning. A quick search of Google Scholar reveals a considerable literature on this topic. A few papers are noted, but the literature is quite large, and it is not clear why the paper does not compare with such approaches.
- Smola, Alex J., S. V. N. Vishwanathan, and Thomas Hofmann. "Kernel methods for missing variables." International Workshop on Artificial Intelligence and Statistics. PMLR, 2005.
- Dick, U., Haider, P., & Scheffer, T. (2008, July). Learning from incomplete data with infinite imputations. In Proceedings of the 25th international conference on Machine learning (pp. 232-239).
- Liu, X., Zhu, X., Li, M., Wang, L., Zhu, E., Liu, T., ... & Gao, W. (2019). Multiple kernel $ k $ k-means with incomplete kernels. IEEE transactions on pattern analysis and machine intelligence, 42(5), 1191-1204.
The paper needs to positioned well in context of this literature, and the proposed algorithm should be compared with existing kernel completion methods.
Weak theory, and insufficient experiments on different missing data:
Theorem 1 is rather weak since it only guarantees that if the incomplete kernel matrix was derived from some true PSD matrix, then the returned projection is closer to the ground truth than the incomplete kernel matrix. However, the algorithm/theorem does not guarantee that the output kernel matrix is significantly better than the given incomplete one. A general guarantee cannot be provided without assumptions.
However, the work would significantly improve if there were a guarantee on recovery of the ground-truth kernel, assuming that the incomplete kernel matrix is obtained from data with features missing uniformly at random (the setting considered in the experiments).
On a similar note, the experiments only assume that features/values are missing at random. However, in practice, data is often systematically missing (some features tend to have higher missing-value rates; there is often correlation between missing entries). Extensive experiments are needed to demonstrate that the proposed methods are robust to types of missing entries other than uniformly random.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The fundamental goal of the paper seems to be kernel completion. What is special about spectral clustering (or its affinity learning) in this context? Why cannot one use the methods for supervised learning?
- It is well known that spectral clustering does not require a positive semi-definite kernel (for instance, one can apply spectral clustering also on a graph adjacency). Then isn't it better to run any other matrix completion approach that does not impose psd?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not discuss any limitation, and the reported experimental results suggest that the proposed methods clearly outperform existing methods.
However, as noted in weakness, the paper needs comparison with existing kernel completion works and needs discussion on what kind of missing data can drastically impact the proposed methods.
Possible negative societal impact is neither evident nor discussed in the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer 7HKb**
Thanks very much for your detailed feedback on the contributions of our work. We are delighted to hear that you found *the proposed approach is technically sound* and *the experiments show clear improvement over existing approaches*. Your comments are greatly appreciated and will help us to further improve our work, especially for the presentation of the paper.
Due to technical constraints, we are unable to upload the updated version of our paper at this point. Therefore, we will further polish it in the new version.
---
**Comment 1**: Few papers are noted but the literature is quite large, and it is not clear why the paper does not compare with such approaches.
**Response 1**: Thanks for your feedback. We are aware of the literature on kernel learning with missing data under the specific settings considered in your recommended papers. Meanwhile, our work addresses **complete but inaccurate** (noisy) kernels arising from incomplete observations, which is fundamentally different from most work in the literature, which primarily deals with **incomplete** kernels.
---
**Comment 2**: A general guarantee cannot be provided without assumptions. However, the work would significantly improve if there is a guarantee on recovery of ground truth kernel assuming that incomplete kernel matrix is obtained from data with features missing uniformly at random.
**Response 2**: To clarify, Theorem 1 relies on a mild assumption that the true kernel matrix is PSD, which is typically assumed in previous studies. It asserts that if the initial kernel $K^0$ (complete but inaccurate) is not a PSD matrix, we can correct it to an estimate $\hat{K}$ closer to the unknown ground truth $K^*$, providing a solid guarantee on the corrected kernel, i.e., $||K^*-\hat{K}||_F \le ||K^*-K^0||_F$. In fact, we can even provide a tighter performance bound for $\hat{K}$ with $||K^*-\hat{K}||_F \le 2||K^*-K^0||_2$, whose proof will be included in the revision.
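The Theorem-1-style guarantee follows from the non-expansiveness of projection onto the convex PSD cone: projecting a noisy kernel onto the cone can only move it closer to any PSD matrix, in particular the ground truth. A numerical sketch (with synthetic matrices, not the paper's data or exact algorithm) illustrates this:

```python
import numpy as np

def proj_psd(A):
    """Nearest PSD matrix in Frobenius norm: clip negative eigenvalues."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 30))
K_true = X @ X.T / 30                  # a PSD "ground truth" Gram matrix
E = rng.standard_normal((8, 8))
K0 = K_true + 0.3 * (E + E.T) / 2      # symmetric noisy observation
K_hat = proj_psd(K0)

# Projection onto a convex set containing K_true is non-expansive toward it:
# ||K_true - K_hat||_F <= ||K_true - K0||_F, with strict inequality iff K0
# is not already PSD.
err_before = np.linalg.norm(K_true - K0)
err_after = np.linalg.norm(K_true - K_hat)
```

This matches the reviewer's point: the bound guarantees "no worse", and is strict only when $K^0$ leaves the cone.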
---
**Comment 3**: Extensive experiments are needed to demonstrate that the proposed methods are robust to different types of missing entries, other than uniformly random.
**Response 3**: As you suggested, we have conducted additional experiments with different missing mechanisms, in addition to the missing-completely-at-random (MCAR) setting in our paper. For a given missing ratio, we generate a block of appropriate size located randomly in each image, and the values in the block are missing, so that the missingness is systematically related to location. Table 1 shows that our KC algorithm consistently outperforms existing data imputation and distance calibration methods under this block-missing mechanism. More results will be included in our further revision.
**Table 1**: Performance of standard spectral clustering (SC) and self-expressive affinity learning (KSSC and KLSR) on the incomplete Yale64 dataset with the block-missing mechanism under a missing ratio of 80%.
|Method|ZERO|MEAN|kNN|EM|SVT|FNNM|GR|KFMC|DC|EE|KC|
|-----------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-----------------|
|SC-ACC|0.273|0.475|0.297|0.472|0.301|0.502|0.271|0.464|0.410|0.532|**0.535**|
|SC-NMI|0.330|0.518|0.351|0.517|0.355|0.536|0.329|0.506|0.466|0.558|**0.562**|
|SC-PUR|0.289|0.492|0.312|0.492|0.319|0.518|0.284|0.476|0.433|0.545|**0.547**|
|KSSC-ACC|0.278|0.473|0.308|0.479|0.311|0.520|0.281|0.439|0.410|0.192|**0.533**|
|KSSC-NMI|0.340|0.526|0.379|0.530|0.376|0.555|0.342|0.496|0.482|0.212|**0.562**|
|KSSC-PUR|0.299|0.498|0.329|0.505|0.330|0.541|0.299|0.468|0.441|0.198|**0.542**|
|KLSR-ACC|0.274|0.516|0.304|0.500|0.309|0.544|0.276|0.475|0.419|0.558|**0.568**|
|KLSR-NMI|0.336|0.544|0.365|0.534|0.365|0.572|0.338|0.533|0.484|0.585|**0.592**|
|KLSR-PUR|0.289|0.529|0.319|0.512|0.325|0.556|0.292|0.493|0.443|0.573|**0.579**|
---
**Comment 4**: 1) The fundamental goal of the paper seems to be kernel completion. 2) What is special about spectral clustering (or its affinity learning) in this context? 3) Why cannot one use the methods for supervised learning?
**Response 4**: **1)** Our work diverges from traditional kernel completion and aims at refining a complete yet inaccurate kernel matrix to yield a more precise estimate. **2)** The performance of spectral clustering algorithms relies heavily on the quality of the affinity matrix, often defined by kernels or self-expressive affinity. Our proposed algorithms are tailored to improve the affinity quality, rendering them particularly suitable for spectral clustering. **3)** Thanks for your suggestion. We agree the approach can be potentially applied to supervised learning tasks, such as nearest neighbor classification and information retrieval tasks. More work will be carried out along this line.
---
**Comment 5**: 1) It is well known that spectral clustering does not require a positive semi-definite kernel. 2) Then isn't it better to run any other matrix completion approach that does not impose psd?
**Response 5**: **1)** As you pointed out, spectral clustering does not necessitate the input to be a PSD kernel. Meanwhile in practice, spectral clustering often employs a Gaussian kernel, which is inherently PSD. Thus, our work adhered to the PSD constraints for the kernel, recognizing that other constraints may warrant further exploration. **2)** In our experimental evaluation, we conducted comparisons with four prevalent matrix completion methods for handling missing data, namely SVT, FNNM, GR, and KFMC. The results showed the superiority of our algorithm, affirming the validity of our approach within the context of spectral clustering.
---
**Comment 6**: The paper does not discuss any limitation.
**Response 6**: Thanks for the suggestion. We will include a discussion of limitation in the revision.
---
Rebuttal Comment 1.1:
Title: Continuing discussions on Responses 3 and 6
Comment: I thank the authors for the response, and for conducting new experiments. I would like to engage bit more on Comments 3,6.
- Please elaborate on Response 6, particularly what you see as potential limitations.
- The reason for my comment that the theory is weak is that the guarantee is $\Vert K^* - \hat{K}\Vert_F \leq \Vert K^* - K_0\Vert_F$; that is, in the worst case, the method need not give a better solution than $K_0$ (hence its significance is not clear from a theoretical perspective). My question was therefore: under some assumptions, is the solution strictly better than $K_0$? More precisely, can one show $\Vert K^* - \hat{K}\Vert_F \leq c\Vert K^* - K_0\Vert_F$ where $c\ll1$?
---
Reply to Comment 1.1.1:
Title: More Detailed Responses to Limitation and Theory
Comment: Thanks for your prompt response. We greatly appreciate your willingness to engage in further discussions and delve into the finer details.
---
**Response to Limitation:**
The potential limitations primarily stem from the time complexity of the method. The per-iteration time complexity of the KC method is currently $O(n^3)$, which poses challenges when dealing with large-scale datasets. To address this issue, a possible solution is to replace the spectral decomposition with a randomized singular value decomposition [1] (as mentioned in Section 3.3). This approach seeks the top-$k$ singular values and effectively reduces the time complexity to $O(n^2 \cdot \log(k) + 2n \cdot k^2)$. However, the trade-off between efficiency and efficacy necessitates further investigation.
[1] Halko N., et al. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM Review, 2011.
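A minimal sketch of the Halko et al. randomized approach, specialized to a symmetric matrix as used here, is below. The oversampling parameter and the sanity-check matrix are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def randomized_top_k_eig(K, k, n_oversample=10, seed=None):
    """Randomized range finder (Halko et al.) for the top-k spectral components
    of a symmetric matrix K, avoiding a full O(n^3) eigendecomposition."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    Omega = rng.standard_normal((n, k + n_oversample))  # random test matrix
    Q, _ = np.linalg.qr(K @ Omega)      # orthonormal basis capturing range(K)
    B = Q.T @ K @ Q                     # small (k+p) x (k+p) projected problem
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]       # keep the k largest eigenvalues
    return w[idx], Q @ V[:, idx]

# Sanity check on an exactly rank-5 PSD matrix: the top-5 components
# reconstruct K up to floating-point error.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
K = A @ A.T
w, V = randomized_top_k_eig(K, k=5, seed=0)
K_approx = (V * w) @ V.T
```

For approximately low-rank kernels this is where the quoted $O(n^2 \log k)$-type cost comes from: only matrix-vector products with $K$ and a small dense eigenproblem are needed.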
---
**Response to Theory:**
We have indeed taken into account the considerations you mentioned and obtained empirical results. Theoretically, if and only if $K^0$ is non-PSD, our estimate $\hat{K}$ is strictly better in the sense that $||K^*-\hat{K}||_F < ||K^*-K^0||_F$.
**1)** In the setting of missing completely at random, we generated a random matrix $X \in \mathbb{R}^{d \times n}$ with $x_{ij} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0,1)$ and presented the values of $\text{Prob}(K^0 \succeq 0 \mid d,n,r)$ in **Figure 1. (d1, d2) in Appendix D.1**. Our observations are as follows:
+ When $d/n$ is relatively small, the initial kernel matrix $K^0$ is non-PSD (with a high probability), resulting in $||K^*-\hat{K}||_F < ||K^*-K^0||_F$.
+ When $(d,n)$ is given, a larger missing ratio $r$ would more likely result in a non-PSD $K^0$ so that $||K^*-\hat{K}||_F < ||K^*-K^0||_F$.
**2)** The improvement from $K^0$ to $\hat{K}$ can be quantified by the relative mean-square error ($\text{RMSE} = \frac{||K^*-\hat{K}||_F^2}{||K^*-K^0||_F^2}$), which shares similar results with the RMSE of the Euclidean distance, as presented in our paper. Both **Figure 2 in the main text** and **Figure 2. (d1, d2) in Appendix D.2** showed that the RMSE of the Euclidean distance can be significantly smaller than 1 ($c\ll 1$), particularly for larger size $n$, larger missing ratio $r$, or smaller dimension $d$. This finding aligns with the RMSE results for kernels. For example, the RMSE of the kernel is about 0.22 on the USPS dataset with a missing ratio of 80%. We will incorporate these into the revised version.
**3)** Regarding upper bound for $||K^*-\hat{K}||_F$, we actually have built a tighter bound in the Response 2, i.e., $||K^*-\hat{K}||_F \le 2||K^*-K^0||_2$, where the spectral norm $||\cdot||_2$ is much smaller than the Frobenius norm $||\cdot||_F$ in practice, providing an improvement in the worst case. In fact, we have been considering this theoretical upper bound for a long time, but have not found a good solution because we do not make any distribution assumptions about $X$, such as low rank. We will continue to work on the theoretical bound of this method in the future. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |